5 completely free ChatGPT alternatives for Mac and PC. All of them work without the Internet

Access to Western neural networks like ChatGPT is limited here. So let's download an alternative ourselves and run it on our own computer.

Progress in neural networks, especially open ones detached from the commercial world, cannot be stopped. Why pay someone, or hand your conversations over to strangers on the Internet, when you can have a neural network chatbot of your own?

Previously, I talked about LM Studio and the opportunities this program opens up to almost all Mac and PC owners. In the comments, you asked for more examples of chatbot models, and that's what this article is about.

What is LM Studio, briefly

LM Studio is a free application that runs neural network chatbots directly on your computer. Apple even mentioned it when presenting the Macs with M4 chips.

In LM Studio, all activity, chats and other data are stored only on the user's device. Nothing goes online. It is 100% your own AI assistant, and you don't have to pay for it, unlike ChatGPT and many other online neural networks and chatbots.

You simply download an LLM model (a file containing an already trained chatbot neural network), launch it in LM Studio and use it as you please. Or rather, as much as your computer's power allows. The interface is available in Russian, and some models even support the Russian language.

What is needed for LM Studio

To get started:

1. Go to the LM Studio website (it may not open from Russia; I'll post the Mac version to our Telegram channel).

2. Download the program for your OS.

3. Install, open.

You can run LM Studio and install the simplest AI chatbot even on a basic Mac with an M1 chip and 8 GB of RAM. However, for decent speed and a longer chat history, 16 GB of RAM or more is recommended; the more, the better.

The situation with PCs is different, since their owners are far less limited in RAM capacity. Here, comfortable use of genuinely useful bots will require either an Nvidia graphics card with at least 8 GB of its own memory, or as much ordinary RAM as possible. Indeed, LM Studio can load up 100 or even 500 GB of RAM; it all depends on how advanced and demanding a particular neural network is.

How to install a model: in LM Studio, click the Discover icon (the magnifying glass in the left corner of the screen) and select any of the popular options offered. You can also paste the address of any other LLM model from the Hugging Face site into the downloader's search bar.
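Besides the built-in chat window, LM Studio can expose a loaded model through a local, OpenAI-compatible HTTP server (started from the app's server/developer section; http://localhost:1234 is the default address). As a rough sketch, assuming that default port and a model already loaded, you can query it with nothing but the Python standard library. The model name "phi-4" here is just an illustrative placeholder; use whatever identifier LM Studio shows for your loaded model.

```python
import json
import urllib.request

# Default address of LM Studio's local OpenAI-compatible server
# (assumption: the server is enabled and uses the stock port 1234).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> bytes:
    """Build an OpenAI-style chat-completion request body as JSON bytes."""
    body = {
        "model": model,  # the identifier LM Studio shows for the loaded model
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body).encode("utf-8")

def ask(model: str, question: str) -> str:
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=build_chat_request(model, question),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (requires LM Studio running with a model loaded and the server on):
#   print(ask("phi-4", "Give me three fun facts about the Moon."))
```

Because the server mimics the OpenAI protocol, any tool or library that can talk to ChatGPT's API can usually be pointed at this local address instead.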

Let's look at 5 good, broadly useful models that can help solve various tasks on Macs and PCs with basic and mid-range configurations.

1. phi-4

A new model from Microsoft, released in December 2024 and created in collaboration with OpenAI, the developers of ChatGPT. Its goal is to achieve the highest possible answer quality at minimal model size.

Its strengths are mathematical problems and calculations in general. Commercial use is permitted. Requires at least 12 GB of RAM.

This model is on Hugging Face

2. Qwen2.5 Coder

A popular model among programmers, designed for generating, analyzing and debugging code. One of the best open LLMs in its class; it is constantly praised and recommended on forums.

It also benefits from a huge number of variants: the smallest, Qwen2.5 Coder 3B, will run even on a PC with 6 GB of RAM, while the advanced Qwen2.5 Coder 32B requires about 24 GB of RAM.
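The RAM figures above line up with a common rule of thumb (my own back-of-the-envelope estimate, not from the model card): a quantized model's weights take roughly parameters × bits-per-weight / 8 bytes, plus overhead for the chat context and runtime. A hedged sketch, where the 1.5× overhead factor is an illustrative guess rather than a measured value:

```python
def estimated_ram_gb(params_billions: float, bits_per_weight: int = 4,
                     overhead: float = 1.5) -> float:
    """Rough RAM estimate for a locally run quantized LLM.

    params_billions: model size, e.g. 3 for Qwen2.5 Coder 3B.
    bits_per_weight: quantization level (4-bit is typical for local use).
    overhead: multiplier covering context cache and runtime structures;
              1.5 is an illustrative assumption, not a measured constant.
    """
    weight_gb = params_billions * bits_per_weight / 8
    return round(weight_gb * overhead, 1)

# The 32B variant at 4-bit works out to about 24 GB, matching the
# figure given above; the 3B variant easily fits a 6 GB machine.
print(estimated_ram_gb(32))
print(estimated_ram_gb(3))
```

Real usage varies with the quantization format and the context length you set, so treat this only as a sizing guide before downloading a multi-gigabyte file.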

This model is on Hugging Face

3. Llama 3.2 3B Instruct 4bit

One of the smallest models. It launches and runs quickly on almost any PC and on absolutely any Mac with an M1 chip or newer.

Suitable for general questions and simple tasks. Ideal as a first model for getting familiar with the capabilities (and typical limitations) of local LLM chatbots. Requires only 4 GB of RAM.

This model is on Hugging Face

4. NemoMix Unleashed 12B or ArliAI RPMax 12B

Two highly capable roleplay-oriented models for a wide range of requests. Suitable for RP, conversation on broad topics, answering questions and simply entertainment. Both have a huge maximum context for chat history: more than 100 thousand tokens each.

Effective operation of both models requires at least 12 GB of video memory or RAM, and with the context set above 12 thousand tokens, hardware no weaker than an RTX 3080 or M4 Pro is recommended.

NemoMix Unleashed 12B on Hugging Face
ArliAI RPMax 12B on Hugging Face

5. NeuralDaredevil 8B Abliterated

One of the fastest of the models covered here. Within a 4-thousand-token limit, its speed and quality of generated responses outclass most of the models above. Recommended for general-purpose answers.

It handles requests and responses in Russian well. Runs smoothly on systems with 12 GB of RAM or more.

This model is on Hugging Face

P.S. There are already more than two hundred LLM models out there, and it is impossible to test them all properly. If you're already using LM Studio, please recommend your favorite models in the comments.







Source: www.iphones.ru