Kai (Konveyor AI)


Kai (/kaɪ/, rhymes with pie) - An AI-enabled tool that simplifies the process of modernizing application source code to a new platform. It uses Large Language Models (LLMs) guided by static code analysis, along with data from Konveyor. This data provides insights into how the organization solved similar problems in the past, helping streamline and automate the code modernization process.

🔍 About The Project

Kai is an AI-enabled tool that assists with modernizing applications. It is designed to help developers write code more efficiently by providing suggestions and solutions to common problems. It does this through Retrieval Augmented Generation (RAG): Konveyor analysis reports about the codebase, together with examples of previously solved problems, are passed to an LLM, which generates solutions grounded in that context.

Now, you may be thinking: How is Kai different from other generative AI tools?

1. Kai uses Konveyor’s analysis reports

Konveyor generates analysis reports via Kantra throughout a migration. This history of reports tells you what’s wrong with your codebase, where the issues are, and when they happened. This functionality exists today, and developers are already using this data to make decisions. Thanks to our RAG approach, all of this is possible without additional fine-tuning.

2. Kai learns throughout a migration

As you migrate more pieces of your codebase with Kai, it can learn from the data available and provide better recommendations for the next application, and the next, and so on. This shapes the code suggestions to resemble how your organization has solved problems in the past.

3. Kai is focused on migration

LLMs are very powerful tools, but without explicit guidance, they can generate a lot of garbage. Using Konveyor’s analysis reports allows us to focus Kai’s generative power on the specific problems that need to be solved. This pointed, specific data is the key to unlocking the full potential of large language models.

🏫 Learn More

Note

Kai is in early development. We are actively working on improving the tool and adding new features. If you are interested in contributing to the project, please see our Contributor Guide.

🗺️ Roadmap and Early Builds

🛠️ Design and Architecture

🗣️ Blog Posts

📽️ Demo Video


Check out our 15-minute guided demo video to see Kai in action!

🚀 Getting Started

Kai has two components that are necessary for it to function: the backend and the IDE extension. The backend is responsible for connecting to your LLM service, ingesting static analysis reports, and generating solutions. The IDE extension is where you interact with Kai, see suggestions, and apply them to your codebase.

Prerequisites

  1. Git
  2. A container engine such as podman or docker. We will provide instructions for podman.
  3. Docker-compose or Podman-compose; either one will work. For podman-compose, you can install it here. A quick sanity check is sketched just after this list.
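
As a quick sanity check, something like the following should confirm the prerequisites are in place. This is a minimal sketch assuming a Linux host with pip available; installing podman-compose via pip is one common route, and your distribution may package it instead:

```bash
# Confirm git and the container engine are on the PATH
git --version
podman --version

# One common way to install podman-compose (your distro may package it instead)
pip3 install --user podman-compose
podman-compose --version
```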

Launch the Kai backend with sample data

Important

Kai is in early development and is not yet ready for production use. We currently recommend checking out the git tag stable for the most stable user experience.

The quickest way to get running is to leverage the sample data committed to the Kai repo along with the podman compose up workflow:

  1. git clone https://github.com/konveyor/kai.git
  2. cd kai
  3. git checkout stable
  4. Make sure the podman runtime is running with systemctl --user start podman
  5. Make sure the logs directory is accessible to the podman container with podman unshare chown -R 1001:0 logs
    • This is necessary to allow podman to write to the logs directory outside the container.
    • Use sudo chown -R <your_user>:<your_user> logs to change the ownership of the logs directory back to your user when done.
  6. Run podman compose up.
    • The first time this is run it will take several minutes to download images and to populate sample data.
    • After the first run the DB will be populated and subsequent starts will be much faster, as long as the kai_kai_db_data volume is not deleted.
    • To clean up all resources run podman compose down && podman volume rm kai_kai_db_data.
    • This will run Kai in demo mode, which uses cached LLM responses, by setting the environment variable KAI__DEMO_MODE=true. To run without demo mode, execute KAI__DEMO_MODE=false podman compose up. See docs/contrib/configuration.md for more information on demo mode. The full command sequence is collected in the sketch below.
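
For reference, here is the whole sequence from the steps above collected into one copy-pasteable session, with demo mode made explicit; every command is taken from the steps themselves:

```bash
# Steps 1-3: fetch the repo and check out the most stable tag
git clone https://github.com/konveyor/kai.git
cd kai
git checkout stable

# Steps 4-5: start the podman runtime and let the container write to logs/
systemctl --user start podman
podman unshare chown -R 1001:0 logs

# Step 6: bring everything up; the first run downloads images and loads sample data
KAI__DEMO_MODE=true podman compose up

# When finished: tear down and remove the sample database
# podman compose down && podman volume rm kai_kai_db_data
```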

The Kai backend is now running and ready to serve requests!

Guided Walk-through

After you have the Kai backend running via podman compose up, you can follow a guided scenario that shows Kai in action at docs/scenarios/demo.md. It walks through using Kai to complete a migration of a Java EE app to Quarkus.

Other ways to run Kai

The above is the quickest path to get Kai running so you can see how it works. If you'd like to take a deeper dive into running Kai against data in Konveyor or your own custom data, please see docs/getting_started.md.

Debugging / Troubleshooting

  • The Kai backend writes logging information to the logs directory. You can adjust the log level via environment variables, for example KAI__FILE_LOG_LEVEL="debug".
  • Tracing information is written to disk to aid deeper exploration of prompts and LLM results. See docs/contrib/tracing.md.
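
For example, a more verbose run without cached LLM responses might look like the following; KAI__FILE_LOG_LEVEL and KAI__DEMO_MODE are the variables mentioned above, and the full set of options is described in docs/contrib/configuration.md:

```bash
# Raise the file log level and disable demo-mode response caching for this run
KAI__FILE_LOG_LEVEL="debug" KAI__DEMO_MODE=false podman compose up
```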

🌐 Contributing

Our project welcomes contributions from any member of our community. To get started contributing, please see our Contributor Guide.

⚖️ Code of Conduct

Refer to Konveyor's Code of Conduct here.

📜 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.