Introducing AI Libraries into Your Codebase the Healthy Way

16/12/2024 · 7 min read

AI is here to stay. Large Language Models (LLMs) like ChatGPT, Claude, or Llama are quickly becoming part of the toolkit for many developers and companies. The problem? Too often, these powerful tools are thrown into existing products without much thought about architecture or maintainability.

What happens when you grab the first chatbot library you find and unleash it into your codebase? Your project can quickly become unmaintainable, difficult to evolve, and fragile when changes come. If you’ve used products that haphazardly integrate generative AI, you’ve probably experienced this firsthand.

In this article, we’ll walk through a simple but powerful example to show you how to introduce generative AI responsibly and maintain a healthy architecture. Specifically, we’ll use a motivational phrase generator for software developers (because let’s face it, we all need some motivation after diving into code). Along the way, we’ll highlight the problems that arise when you tightly couple AI libraries to business logic and demonstrate how to fix them using Inversion of Control (IoC).

Let’s get started.


The Problem: AI Coupled to Business Logic

Imagine a simple Python program to generate motivational phrases using a generative AI model. The program does the following:

  1. Instantiates an AI model using the langchain library (here, a local Llama model served through Ollama).
  2. Sends a prompt to the model to generate a motivational phrase.
  3. Returns the generated phrase.

Here’s the quick-and-dirty implementation:

from langchain.llms import Ollama

def main():
    llm = Ollama(model="llama3.2")
    prompt = "Generate a motivational phrase for software developers."
    result = llm.invoke(prompt)
    print(result)

if __name__ == "__main__":
    main()

On the surface, this code works fine. Run it, and you get something like this:

"The creation of innovative and sustainable solutions to improve people's lives is the true purpose of software development."

Sounds motivational, right? But there are serious problems lurking beneath the surface:

  1. Infrastructure and Business Logic Are Mixed: The program's intent (generate a motivational phrase) is tainted by implementation details from the Ollama model and langchain library.
    • If you decide to switch from Llama to ChatGPT or Claude, you’ll have to modify your business logic.
  2. Hard to Test: To test the function, you'd have to deal with the actual AI library, requiring monkey-patching or mocking at a low level.
  3. Poor Separation of Concerns: The code has multiple responsibilities — managing the LLM and defining the business logic for generating motivational phrases.

This approach may seem harmless in small scripts, but in complex projects, this tight coupling leads to:

  • Code that's hard to maintain or extend.
  • Infrastructure changes breaking core functionality.
  • Tests that are brittle and cumbersome.
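
To make that last point concrete, here is roughly what unit-testing the coupled version looks like. Because main() builds its Ollama instance internally, a test has to fake the langchain module itself before the code under test even imports it. (The module shim and the canned reply below are illustrative, not part of the original script.)

```python
import sys
import types
from contextlib import redirect_stdout
from io import StringIO
from unittest.mock import MagicMock

# Fake the langchain dependency *before* the code under test is imported,
# because the coupled main() creates its Ollama instance internally.
fake_llms = types.ModuleType("langchain.llms")
mock_llm = MagicMock()
mock_llm.invoke.return_value = "Keep shipping."
fake_llms.Ollama = MagicMock(return_value=mock_llm)
fake_langchain = types.ModuleType("langchain")
fake_langchain.llms = fake_llms
sys.modules["langchain"] = fake_langchain
sys.modules["langchain.llms"] = fake_llms

# Stand-in for the coupled script's main(), repeated here so the
# snippet runs on its own.
def main():
    from langchain.llms import Ollama
    llm = Ollama(model="llama3.2")
    prompt = "Generate a motivational phrase for software developers."
    print(llm.invoke(prompt))

# The "test": capture stdout and compare against the canned reply.
buffer = StringIO()
with redirect_stdout(buffer):
    main()
assert buffer.getvalue().strip() == "Keep shipping."
```

All of that ceremony exists only because the dependency is hard-wired inside the function. With dependency injection, as we'll see below, the same test shrinks to a few lines.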

The solution? Decouple your business logic from infrastructure using Inversion of Control.


The Solution: Inversion of Control (IoC)

Inversion of Control is a design principle where dependencies are not created or managed directly within the business logic. Instead, they are injected from the outside. This makes code more modular, testable, and easier to change.

Here’s how we’ll refactor our motivational phrase generator:

  1. Create an Abstract Service: Define an interface (or abstract class) for generating text.
  2. Inject the Implementation: Pass the LLM implementation to the business logic as a dependency.
  3. Separate Responsibilities: Let the generator worry only about defining the prompt, while the AI service handles the text generation.

Step 1: Define the Abstract Service

We’ll start by creating an abstract class for the text generation service:

from abc import ABC, abstractmethod

class LLMService(ABC):
    @abstractmethod
    def generate_text(self, prompt: str) -> str:
        pass

This abstract class defines a contract: any implementation of LLMService must provide a generate_text method.
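
The abc machinery enforces this contract at instantiation time, so a forgotten generate_text fails fast instead of blowing up mid-request. A quick check (the IncompleteService class is just for demonstration):

```python
from abc import ABC, abstractmethod

class LLMService(ABC):
    @abstractmethod
    def generate_text(self, prompt: str) -> str:
        pass

# Deliberately broken subclass: no generate_text implementation.
class IncompleteService(LLMService):
    pass

try:
    IncompleteService()
except TypeError as err:
    print(f"Rejected: {err}")
```

Python refuses to instantiate the incomplete subclass and raises a TypeError naming the missing method.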


Step 2: Decouple the Generator

Next, we refactor the motivational phrase generator to depend on the abstract service instead of a specific LLM:

class MotivationalPhraseGenerator:
    def __init__(self, llm_service: LLMService):
        self._llm_service = llm_service

    def generate_phrase(self) -> str:
        prompt = "Generate a motivational phrase for software developers."
        return self._llm_service.generate_text(prompt)

Now, the generator is clean and focused. It only knows about the intent (generate a motivational phrase), not the implementation details of the AI model.


Step 3: Implement the Service

We implement the abstract service for the Ollama model:

from langchain.llms import Ollama

class OllamaService(LLMService):
    def __init__(self, model: str):
        self._model = Ollama(model=model)

    def generate_text(self, prompt: str) -> str:
        return self._model.invoke(prompt)
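
Swapping models now means writing a new adapter and nothing more. As an illustration (the CannedPhraseService class below is hypothetical, not part of the article's code), even a trivial offline implementation satisfies the same contract and plugs into the generator unchanged:

```python
from abc import ABC, abstractmethod

# Repeated from the article so this snippet runs on its own.
class LLMService(ABC):
    @abstractmethod
    def generate_text(self, prompt: str) -> str:
        pass

class MotivationalPhraseGenerator:
    def __init__(self, llm_service: LLMService):
        self._llm_service = llm_service

    def generate_phrase(self) -> str:
        prompt = "Generate a motivational phrase for software developers."
        return self._llm_service.generate_text(prompt)

# Hypothetical offline implementation: cycles through canned phrases,
# handy for demos or environments without a model server.
class CannedPhraseService(LLMService):
    def __init__(self, phrases):
        self._phrases = list(phrases)
        self._index = 0

    def generate_text(self, prompt: str) -> str:
        phrase = self._phrases[self._index % len(self._phrases)]
        self._index += 1
        return phrase

generator = MotivationalPhraseGenerator(
    CannedPhraseService(["Ship it.", "Refactor fearlessly."])
)
print(generator.generate_phrase())  # -> "Ship it."
print(generator.generate_phrase())  # -> "Refactor fearlessly."
```

The generator never learns which service it received; only the wiring code decides.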

Step 4: Bring It All Together

Finally, we instantiate the generator with the specific AI service:

if __name__ == "__main__":
    service = OllamaService(model="llama3.2")
    generator = MotivationalPhraseGenerator(service)
    print(generator.generate_phrase())

Run this, and you’ll still get those beautiful motivational phrases:

"Developing software is not just creating code; it's building solutions that improve lives and change the world."


Benefits of This Refactoring

By applying Inversion of Control, we’ve:

  1. Separated Concerns: The generator focuses solely on defining the prompt. The AI service handles the text generation.

  2. Improved Maintainability: Switching to another LLM (e.g., ChatGPT) only requires implementing a new service. The generator remains untouched.

  3. Enabled Testability: We can easily mock the service for unit tests:

    class MockLLMService(LLMService):
        def generate_text(self, prompt: str) -> str:
            return "This is a mock motivational phrase."
    
    def test_generator():
        service = MockLLMService()
        generator = MotivationalPhraseGenerator(service)
        assert generator.generate_phrase() == "This is a mock motivational phrase."
    
  4. Improved Extensibility: Adding new features or LLMs doesn’t require modifying existing code.
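
One way to cash in on that extensibility is a small composition root: a single place that maps a configuration value to a concrete service, so nothing else in the codebase names an LLM. The sketch below is illustrative (build_service and the registry are assumptions, not part of the article's code), with the mock standing in for the real adapters so it runs anywhere:

```python
from abc import ABC, abstractmethod

class LLMService(ABC):
    @abstractmethod
    def generate_text(self, prompt: str) -> str:
        pass

# Stand-in implementation so the sketch runs without a model server;
# a real app would register OllamaService here as well.
class MockLLMService(LLMService):
    def generate_text(self, prompt: str) -> str:
        return "This is a mock motivational phrase."

def build_service(name: str) -> LLMService:
    """Hypothetical composition root: the only place that knows
    which concrete service backs the application."""
    registry = {"mock": MockLLMService}
    try:
        return registry[name]()
    except KeyError:
        raise ValueError(f"Unknown LLM service: {name!r}")

service = build_service("mock")
print(service.generate_text("anything"))  # -> "This is a mock motivational phrase."
```

Pointing the app at a different model then becomes a one-line change in the registry (or a config value), with the business logic untouched.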


Final Thoughts

Introducing generative AI into your projects can be exciting, but it needs to be done thoughtfully. Tight coupling of infrastructure (like LLMs) with business logic leads to messy, unmaintainable code that’s hard to scale and test.

By applying Inversion of Control and adhering to clean architecture principles, you can:

  • Keep your business logic pure and focused.
  • Test your code easily.
  • Adapt to changing technologies without headaches.

Whether you’re building motivational generators or complex AI-driven systems, this approach will help you keep your codebase healthy and ready for the future.

Happy coding, and remember:

"Every line of code counts."
