A Game-Changer with Local LLM Support
In a groundbreaking move, Opera’s developer branch has rolled out support for running Large Language Models (LLMs) locally. This marks a significant milestone in the realm of artificial intelligence and web browsing.
Opera, a renowned web browser company, has long been at the forefront of innovation, and its latest offering, the ability to download and run LLMs locally, is no exception. The feature is currently available to users of Opera One who receive developer-stream updates.
A New Era of AI Browsing
The update introduces 150 LLMs from 50 model families, including LLaMA, Gemma, and Mixtral. Previously, Opera offered support only for its own AI assistant, Aria, a chatbot similar to Microsoft’s Copilot and OpenAI’s ChatGPT.
The key difference is that Aria, Copilot, and similar AI chatbots depend on an internet connection to a dedicated server. With the locally run LLMs added to Opera One Developer, data stays on the user’s PC, and no internet connection is needed except for the initial download of the model.
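Opera has not published an API for its local models, so as a rough illustration only, here is a minimal Python sketch of the general pattern: prompts go to a model server on localhost rather than to a remote datacenter. The endpoint and model name follow the conventions of Ollama, a popular open-source tool for local inference, and are assumptions rather than Opera internals.

```python
import requests

# Assumption: an Ollama-style server is running locally. This is NOT
# Opera's internal API, just a common pattern for local inference.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama2") -> str:
    """Send a prompt to a locally hosted model; no data leaves the machine."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize why local inference helps privacy."))
```

The point of the pattern is visible in the URL: everything happens on localhost, so the prompt and the answer never cross the network after the one-time model download.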
The Future of AI Browsing
Opera’s new local LLM feature opens up a world of possibilities. Imagine a future where your browser could rely on AI solutions based on your historical input while keeping all of that data on your device. Privacy enthusiasts might appreciate their data staying solely on their PCs, but a browser-based LLM remembering quite that much might give others pause.
The Road Ahead
As of now, there is no timeline for when or how this feature will come to the regular Opera browsers. Features launched in the AI Feature Drop Program typically continue to evolve before reaching the main browsers.
Despite these advancements, it’s worth noting that going local will probably be “considerably slower” than using an online LLM. Storage may also be a concern for anyone wanting to try many models: Opera states that each LLM requires between two and ten gigabytes of disk space.
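Those figures add up quickly. As an illustrative sketch (the two-to-ten-gigabyte range is Opera’s estimate; the path, model count, and helper name here are placeholders), a few lines of Python can check whether a disk has room before downloading several models:

```python
import shutil

GB = 1024 ** 3  # bytes per gibibyte

def can_fit_models(path: str = "/", count: int = 3, worst_case_gb: int = 10) -> bool:
    """Hypothetical helper: True if `count` models at Opera's worst-case
    size estimate (10 GB each) would fit on the disk holding `path`."""
    free_gb = shutil.disk_usage(path).free / GB
    needed_gb = count * worst_case_gb
    print(f"Free: {free_gb:.1f} GB, needed (worst case): {needed_gb} GB")
    return free_gb >= needed_gb

if __name__ == "__main__":
    can_fit_models(count=5)  # five models could need up to 50 GB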
Despite these limitations, the Opera One Developer branch is the first browser to offer built-in support for running LLMs locally. It is also one of only a few solutions of any kind to bring LLMs to local PCs, alongside Nvidia’s Chat with RTX chatbot and a handful of other apps.
Here are some intriguing local LLMs that you might want to delve into:
Code Llama:
Code Llama is an extension of Llama designed to generate and discuss code, aiming to boost efficiency for developers. It comes in three sizes, with 7, 13, and 34 billion parameters, and supports a variety of popular programming languages such as Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash, and more. A minimal usage sketch follows the list of variants below.
Code Llama offers several variations:
- Instruct: This variation is fine-tuned to generate helpful and safe answers in natural language.
- Python: This is a specialized variation of Code Llama that is further fine-tuned on 100 billion tokens of Python code.
- Code: This is the base model for code completion.
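If you want to try Code Llama outside the browser, here is a minimal, illustrative sketch using the Hugging Face transformers library (assuming transformers and accelerate are installed; the checkpoint is Meta’s public 7B Instruct release). It shows one common way to run the model locally, not Opera’s integration:

```python
from transformers import pipeline

# Meta's public 7B Instruct checkpoint on Hugging Face; swap in the
# 13B or 34B variants if your hardware can hold them.
generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-Instruct-hf",
    device_map="auto",  # place layers on GPU(s) if available
)

# The Instruct variant expects prompts wrapped in [INST] ... [/INST].
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```

The [INST] … [/INST] wrapper is the prompt format the Instruct variant is tuned for; the base Code variant can instead be fed a raw code prefix and asked to complete it.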
Phi-2:
Phi-2 is another noteworthy LLM, released by Microsoft Research. It’s a 2.7-billion-parameter language model with strong reasoning and language-understanding capabilities for its size, and it is well suited to prompts in question-answering, chat, and code formats.
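As a brief illustration of those three formats, the sketch below follows the prompt conventions documented on the microsoft/phi-2 model card (again using transformers; the example prompts themselves are made up):

```python
from transformers import pipeline

phi2 = pipeline("text-generation", model="microsoft/phi-2", device_map="auto")

# Question-answering format: "Instruct: ... / Output:"
qa_prompt = "Instruct: Explain what a local LLM is in one sentence.\nOutput:"
# Chat format: alternating speaker names
chat_prompt = "Alice: What's a good way to learn Rust?\nBob:"
# Code format: a function signature plus docstring to complete
code_prompt = (
    "def is_palindrome(s: str) -> bool:\n"
    '    """Return True if s reads the same forwards and backwards."""\n'
)

for prompt in (qa_prompt, chat_prompt, code_prompt):
    out = phi2(prompt, max_new_tokens=64, do_sample=False)
    print(out[0]["generated_text"], "\n---")
```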
Mixtral:
Mixtral, a sparse mixture-of-experts model from Mistral AI, is designed to excel at a broad spectrum of natural language processing tasks, including text generation, question answering, and language understanding. Its key benefits are performance, versatility, and accessibility.
In conclusion, Opera’s move to support local LLMs is a game-changer in the field of AI and web browsing. It not only enhances user experience but also paves the way for a future where AI and privacy can coexist seamlessly.