In this workshop, you will learn how to explore and compare over a hundred LLMs. Once you have found the LLM that works best for your use case, you will learn how to deploy it directly from LLM Labs to your application. The workshop will also show you how to evaluate and fine-tune an LLM's prompt responses.
The workshop takes about 30 minutes to complete and walks through all of the steps needed to understand how to use LLM Labs.
LLM Labs is currently free for users to create a workspace. The only cost involved with LLM Labs is the price of the inference call to the LLM.
The platform provides pricing estimates for each type of call to each model, so you are aware of the costs at every step of the way.
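To make the cost model concrete, here is a minimal sketch of how a per-call estimate is typically computed from token counts and per-token rates. The function name and the prices are illustrative assumptions, not LLM Labs' actual rates; the platform shows the real numbers per model.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Estimate the cost of a single inference call in USD.

    Prices are quoted per 1,000 tokens, as is common for LLM providers.
    """
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# Example: 500 prompt tokens and 200 response tokens at illustrative rates.
cost = estimate_cost(500, 200,
                     input_price_per_1k=0.0005,   # hypothetical rate
                     output_price_per_1k=0.0015)  # hypothetical rate
print(f"${cost:.4f}")
```

Because output tokens are usually priced higher than input tokens, capping response length is one of the simplest ways to control spend.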
Your first few calls are on us. We’re offering new users the ability to make calls to any LLM free of charge.
If you are a developer building an app that uses LLMs, this workshop is for you. You will be able to deploy your fine-tuned LLM directly from Datasaur to your application.
If you are a data scientist exploring how LLMs can label and/or evaluate your datasets, LLM Labs will be incredibly useful for this purpose.
This workshop does not require advanced technical know-how. We will cover how to compare LLMs, how to evaluate them, and how to deploy the best one for your business. Our interface is designed for non-technical users, so hobbyists, developers, and data scientists alike will be able to follow along and learn from this workshop.
What is a Vector Store? A Vector Store is a specialized database designed to store and manage vector embeddings of text data. It acts as your central hub for all LLM-related vector embeddings within LLM Labs. It offers:
Efficient Retrieval: It allows you to quickly retrieve relevant information based on similarity scores, making it ideal for tasks like question answering and document search.
Knowledge Base: The Vector Store can be enhanced with additional information, turning vectors into rich knowledge representations.
Flexibility: You can easily create, update, and delete vectors, providing flexibility for various projects.
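The ideas above can be illustrated with a toy in-memory vector store. This is a simplified sketch of the concept, not LLM Labs' implementation: entries pair a text with its embedding, and retrieval ranks entries by cosine similarity to a query vector.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class TinyVectorStore:
    """Toy vector store: add (text, embedding) pairs, search by similarity."""

    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.entries.append((text, vector))

    def search(self, query_vector, top_k=1):
        # Score every entry against the query, highest similarity first.
        scored = [(cosine_similarity(query_vector, vec), text)
                  for text, vec in self.entries]
        scored.sort(reverse=True)
        return [text for _, text in scored[:top_k]]

# Tiny hand-made embeddings stand in for a real embedding model here.
store = TinyVectorStore()
store.add("Refund policy: returns accepted within 30 days", [0.9, 0.1, 0.0])
store.add("Shipping times: orders arrive in 5 days", [0.1, 0.9, 0.0])
print(store.search([0.8, 0.2, 0.0]))  # returns the refund entry first
```

A production vector store adds persistence, approximate-nearest-neighbor indexing, and the create/update/delete operations described above, but the retrieval principle is the same.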
Model Management is designed to streamline the process of selecting, using, and deploying LLMs. With a diverse range of over 250 foundation models at your disposal, you can find the perfect fit for your specific use case without the need for extensive training.
Sandbox is a key feature within LLM Labs, providing a user-friendly environment specifically designed for LLM experimentation. It allows you to:
Connect your preferred LLM model: Integrate your choice of LLM models to explore their functionalities.
Create a dedicated sandbox: Set up a personalized workspace for your LLM experimentation.
Configure your sandbox (LLM Application): This configuration defines your LLM Application and involves elements such as:
Prompt Template: Craft a template that specifies the format for user prompts sent to your LLM model. This ensures consistency and clarity in user interactions.
Context (Vector store): Optionally, integrate a context vector store to provide additional background information to the LLM model, potentially improving its understanding and response accuracy.
Configuration: Fine-tune various parameters for the connected LLM model, such as temperature or token settings, to optimize its performance for your specific use case.
Run prompts: Test your LLM models with various prompts to evaluate their responses and refine your approach.
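To show how the pieces of a sandbox configuration fit together, here is a minimal sketch of a prompt template, an optional context passage, and model parameters. All names here (the template text, the model name, the config keys) are illustrative assumptions and not LLM Labs' actual API.

```python
from string import Template

# Prompt Template: a fixed format for prompts sent to the model.
prompt_template = Template(
    "You are a support assistant.\n"
    "Context:\n$context\n\n"
    "User question: $question"
)

# Configuration: hypothetical parameters of the kind a sandbox exposes.
config = {
    "model": "example-model",  # one of the available foundation models
    "temperature": 0.2,        # lower values give more deterministic responses
    "max_tokens": 256,         # cap on response length
}

# Context: in practice this passage would come from the vector store.
prompt = prompt_template.substitute(
    context="Refund policy: purchases can be returned within 30 days.",
    question="Can I return an item after two weeks?",
)
print(prompt)
```

Running prompts then amounts to sending `prompt` to the configured model and inspecting the responses, adjusting the template, context, and parameters until the behavior suits your use case.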
Sandbox offers several advantages for LLM enthusiasts and developers:
Reduced Risk: Experiment with different models without committing to deployment, minimizing potential risks associated with real-world use cases.
Enhanced Understanding: Gain deeper insights into individual LLM models and their capabilities through hands-on experimentation.
Optimized Configuration: Fine-tune model parameters within the sandbox to achieve the best possible results for your specific needs.
Streamlined Development: Test and refine your LLM applications in a controlled environment before deployment, ensuring optimal performance.