Wingman Introduction
Wingman is a revolutionary chatbot that allows you to run Large Language Models (LLMs) locally on your PC or Mac (Intel or Apple Silicon). With the first beta release, Rooster, now available, Wingman offers an intuitive graphical interface that eliminates the need for code or terminals, making it accessible for anyone to run LLMs with just a few clicks.
Wingman Features
Run LLMs Locally
Wingman enables you to run large language models like Llama 2, Phi, Mistral, Yi, and Zephyr right on your local machine. This means no more reliance on cloud-based services, and you can work offline.
Easy-to-Use UI
No more code or terminals. Wingman's user-friendly interface makes running LLMs approachable for everyone. Just point, click, and start conversing.
Access to Multiple Models
Enjoy access to a wide range of cutting-edge language models within Wingman's chatbot interface. Stay updated with the latest models from Hugging Face's model hub without leaving the app.
Compatibility Check
Wingman evaluates compatibility upfront, so you know which models you can realistically run based on your system's specs, avoiding crashes or slow performance.
Customizable System Prompts
Get more out of LLMs by customizing system prompts and creating templates for different use cases. Interact with models as characters or with specific viewpoints.
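As an illustration, a character-style system prompt saved as a template might look something like this (a hypothetical example; the exact wording and template format are up to you and the model you load):

```text
You are Captain Ayla, a seasoned airline pilot with 20 years of experience.
Answer every question from a practical aviation perspective, keep explanations
concise, and say so plainly when a topic falls outside your expertise.
```

Saving prompts like this as reusable templates lets you switch personas or use cases between chats without retyping them.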
Wingman Compatibility
Wingman supports Windows PCs and macOS. On PC, it works with Nvidia GPUs or CPU-based inference, and on macOS, it supports both Intel and Apple Silicon devices.
Wingman Security
Wingman runs entirely on your machine, ensuring your data stays private. There's no need to share your secrets with OpenAI, Google, or any other third-party service.
Wingman FAQs
How much does Wingman cost?
Wingman is open-source and completely free. You can download it from the GitHub repository and try it out yourself.
What are the system requirements?
Wingman is compatible with Windows PCs and macOS. For PCs, it supports Nvidia GPUs or CPU-based inference, and for macOS, it supports both Intel and Apple Silicon devices.
How do I use Wingman?
To use Wingman, simply download the app, run the installer, and you're ready to go.
Is Wingman secure?
Yes, Wingman runs locally on your machine and does not share your data with any external services. It doesn't require an internet connection to run, except for initially downloading the models.
How can I contribute to Wingman?
Wingman is an open-source project. You can contribute by visiting the GitHub repository to report issues, submit pull requests, or suggest new features.
How often are updates released?
You can expect frequent updates for Wingman. The developer is actively working on new features, bug fixes, and improvements.
Can Wingman be used offline?
Yes, Wingman can run fully offline once you've downloaded the necessary models.
Is there an API available?
An API is in development and will be available for public release in the future.
Does Wingman support multi-modal prompting?
Currently, Wingman does not support multi-modal prompting, but it is being tested internally for future implementation.
Wingman's First Beta Release
Wingman's first beta release, Rooster, is now available for you to try out. Experience the power of running LLMs locally with an easy-to-use interface and the freedom to customize your interactions.