“Model-Agnostic LLM Prompt Management”: A Prompty Getting Started Guide

A common problem when building generative AI into an application is that there is no standard way to manage prompts. Each team that builds AI into its code tends to use a different approach and manage data in a different way, constantly reinventing the wheel without learning from other teams and projects.


It is a waste of time to build a new AI interaction model for each application and to store, use, and update prompts in different ways. AI developer resources are limited, and skilled developers work across multiple projects; it is not practical to expect them to remember how each application works and how its prompts should be configured and tested.

The complexity increases when using multiple AI models. A team may be building custom tools on top of hosted models such as OpenAI’s GPT or Anthropic’s Claude, working with open models such as Meta’s Llama, or building applications around local, small language models such as Microsoft’s Phi.

What is Prompty?

What is needed is a model-agnostic way of working with LLMs that lets you experiment with them inside your development tools without context switching. That is exactly what the Microsoft-sponsored Prompty project sets out to provide. Prompty is a Visual Studio Code extension that helps solve many of the problems associated with working with LLMs and other generative AI tools.

Prompty is an active open source project hosted on GitHub; you can contribute code or submit requests to the development team. The extension is available in the Visual Studio Code Marketplace and integrates with the Visual Studio Code file system and code editor. The documentation lives on the project website. It isn’t very extensive yet, but it’s enough to get you started.

Prompty is a very intuitive tool. Its easy-to-understand format is modeled on familiar configuration languages such as YAML, which makes sense, since building prompts is largely a matter of configuring a generative AI interaction. Think of a prompt as a way of defining the semantic space that the model searches to provide its answers.

At its heart, Prompty is a domain-specific language for describing interactions with generative AI. It is built into a Visual Studio Code extension that provides features such as formatting, linting, error highlighting, and a language server for code completion. It supports both Python and C# output, with support for JavaScript and TypeScript planned for future versions.

If you haven’t looked through the Build 2024 session content yet, there is an interesting session on using Prompty as part of an AI development platform that is well worth watching.

Building a prompt using Prompty

Using Prompty in your code is no different from using any other library. Along with the Visual Studio Code extension, you need an application framework with the appropriate packages installed. Once you have an application skeleton that can access an LLM endpoint, you can use the Prompty extension to add prompt assets to your code. Right-click the root folder of your application in the Visual Studio Code explorer and create a new Prompty. This adds a .prompty file to the folder; you can rename it as needed.

To begin creating a prompt asset, open the .prompty file. It is a formatted document with two sections. The first is a detailed description of the application you are building, including details of the model you are using, the parameters your application should use, and sample data for grounding the prompt. The second section contains the base system prompt that defines the type of output you expect, followed by the context: information supplied by the user or by the calling application, which the LLM turns into natural language output.
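
As a rough sketch of that layout, a minimal .prompty asset might look like the following. The file name, deployment name, and sample values here are hypothetical, and the exact fields should be checked against the Prompty documentation.

```yaml
---
name: shopping_assistant                 # hypothetical asset name
description: Answers customer questions in a friendly, concise tone
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}   # read from an environment variable
    azure_deployment: gpt-4o-mini                  # hypothetical deployment name
  parameters:
    max_tokens: 1024
    temperature: 0.2
sample:
  customer_name: Kim
  question: Which tent is best for winter camping?
---
system:
You are a helpful shopping assistant. Answer briefly and politely,
addressing {{customer_name}} by name.

user:
{{question}}
```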

You can test your prompts from within Prompty and see the results in the Visual Studio Code output window. This lets you refine the behavior you want from the LLM output, for example switching from an informal, chat-like tone to a more formal one. You will need to provide the appropriate environment variables, including any authentication tokens; as always, keep these in a separate file to avoid accidental exposure.
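
You can also exercise a prompt asset from the Prompty Python runtime as a quick check outside the editor. The sketch below assumes the prompty package with its Azure extra is installed (roughly pip install "prompty[azure]") and reuses the hypothetical file and input names from the example above; check the package documentation for the exact call signature.

```python
import os

import prompty
import prompty.azure  # registers the Azure OpenAI invoker


def main() -> None:
    # Credentials stay out of the code: the .prompty frontmatter reads the
    # endpoint (and any key) from environment variables kept in a separate file.
    if not os.environ.get("AZURE_OPENAI_ENDPOINT"):
        raise SystemExit("Set AZURE_OPENAI_ENDPOINT before running this test.")

    # Run the prompt asset with sample inputs and print the model's reply.
    response = prompty.execute(
        "shopping_assistant.prompty",
        inputs={
            "customer_name": "Kim",
            "question": "Which tent is best for winter camping?",
        },
    )
    print(response)


if __name__ == "__main__":
    main()
```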

Using an LLM orchestrator with Prompty

Once you’ve written and tested a prompt, you can export the prompt asset data and use it with the LLM orchestrator of your choice, whether that’s Prompt Flow in Azure AI Studio or Semantic Kernel for building standalone AI-based agents. That lets you add supporting data and use your prompts to create a natural language interface to external data sources, reducing the risk of inaccurate output.

The resulting function uses the Prompty description to build the interaction with the LLM, which you can wrap in an asynchronous operation. The result is an AI application with very little code beyond assembling user input and displaying the LLM output. Most of the heavy lifting is handled by tools such as Semantic Kernel, and because the prompt definition is kept separate from the application, LLM interactions can be updated from outside the application by editing the .prompty asset files.
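
As an illustration of that wrapping, here is a minimal asynchronous sketch in Python, again assuming the prompty package and the hypothetical shopping_assistant.prompty asset rather than any orchestrator-generated code:

```python
import asyncio

import prompty
import prompty.azure  # registers the Azure OpenAI invoker


async def ask_assistant(customer_name: str, question: str) -> str:
    """Run the prompt asset without blocking the event loop."""
    # prompty.execute is a synchronous call, so push it onto a worker thread.
    return await asyncio.to_thread(
        prompty.execute,
        "shopping_assistant.prompty",
        inputs={"customer_name": customer_name, "question": question},
    )


async def main() -> None:
    # The application code is little more than gathering input and showing output.
    answer = await ask_assistant("Kim", "Which tent is best for winter camping?")
    print(answer)


if __name__ == "__main__":
    asyncio.run(main())
```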

To include prompt assets in your application, you simply pick an orchestrator and let Prompty automatically generate the code that wires the prompts into your application. The set of supported orchestrators is currently limited, but since this is an open source project, you can submit additional code generators to support alternative application development toolchains.

That last point is particularly important. Prompty is currently focused on building prompts for LLMs hosted in the cloud, but we are in the middle of a shift from large models to smaller ones, with tools such as Microsoft’s Phi Silica designed to run on personal and edge hardware and on the neural processing units in smartphones.

To deliver edge AI applications, tools like Prompty need to be part of the toolchain, working with local endpoints to generate API calls for common SDKs. It will be interesting to see whether Microsoft extends Prompty to work with the Phi Silica classes it has promised to ship in the Windows App SDK as part of the Copilot runtime. That would give .NET and C++ developers the tools they need to manage both local and cloud-targeting prompts.

Growth of the AI toolchain

Tools like this are an important part of the AI application development toolchain because they allow people with different skills to collaborate. Prompt engineers get a tool for authoring and managing the prompts needed to deliver consistent AI applications, in a form that application developers can use in their code. Visual Studio Code lets you combine multiple extensions into a single, consistent toolchain, an approach that can work better than a single monolithic AI development environment.

If you’re tuning a model, you can use the Windows AI Toolkit. If you’re building prompts, you can use Prompty. Developers can use the Windows App SDK and whatever C# or C++ tools they prefer, along with tooling for their chosen orchestrator. Visual Studio Code lets you choose the extensions you need for a project, and designers can use Microsoft Dev Box virtual machines or GitHub Codespaces to build and manage the right development environment with the right toolchain.

Prompty is an important part of a more mature approach to LLM application development. By testing and debugging prompts outside of the code, prompt documentation can be built in parallel with the application, which helps prompt engineers and application developers collaborate more effectively, much as front-end tools like Figma support similar collaboration with designers on the web.
editor@itworld.co.kr

Source: www.itworld.co.kr