ChatGPT on steroids? What is AutoGPT?

Over the past six months, the world has marvelled at the possibilities of generative artificial intelligence, which can seemingly answer any question. Well, perhaps not every question, and perhaps not truly "answer": rather, it generates a string of words that a human reads as the desired content. Nevertheless, ChatGPT, GPT-4 and the new Bing and Bard delight millions of people every day.
We entrust generative artificial intelligence with a great deal, from answering unusual questions about history and science to preparing longer content, if not entire projects. However, most approaches to harnessing generative AI for longer texts do not end with typing a single prompt, but a whole series of them, along with constant correcting and steering of the model towards exactly what you need.
And let's not forget that our OpenAI favourites are limited by content moderation and lack a "connection" to the Internet. The remedy for all these ills is AutoGPT.
What does AutoGPT mean?
Many people call AutoGPT "ChatGPT on steroids". Is that accurate? Yes and no.
No, because contrary to what its name suggests, AutoGPT is not an OpenAI creation. It is not even an artificial intelligence model, but a tool built on the OpenAI model API.
AutoGPT was created at the end of March this year by Toran Bruce Richards, a game developer and founder of the software company Significant Gravitas. As he describes on his Twitter profile, AutoGPT is "an open source project that is an experiment towards making GPT-4 a fully autonomous model of artificial intelligence."
Richards' AutoGPT was inspired by the release of GPT-4, which shortly after launch became known as OpenAI's best model, demonstrating extraordinary capabilities in both understanding and generating text. With the project, the programmer wanted to see how far the automation of work with artificial intelligence could be "pushed".
How does AutoGPT work and how does it differ from ChatGPT and other models?
To put it simply, AutoGPT is an application written in Python that operates using OpenAI's language models through the API the company makes available, for a fee, to anyone interested. What distinguishes the AutoGPT application from the models themselves is that it automates processes we would otherwise have to carry out by hand with the "bare" models.
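To illustrate what "using the models through the API" means in practice, here is a minimal sketch of how a tool like AutoGPT assembles a single model request in Python. The helper function name is a hypothetical illustration; the payload shape mirrors OpenAI's public chat-completions endpoint:

```python
# Sketch: building one chat-completion request of the kind a tool
# like AutoGPT sends for every step of its work.
# build_chat_request is a hypothetical helper, not AutoGPT's own code.

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON body for a single model call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an autonomous task agent."},
            {"role": "user", "content": prompt},
        ],
    }

# In a real tool, this body would be POSTed to
# https://api.openai.com/v1/chat/completions with an API key header.
payload = build_chat_request("Suggest a name for a blog about coffee")
print(payload["model"])          # gpt-4
print(len(payload["messages"]))  # 2
```

Every such request is billed in tokens, which is why an autonomous tool that fires off dozens of them can get expensive quickly.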
For example, when you ask ChatGPT or GPT-4 to create a blog, it will describe the steps needed to achieve it (choose a topic, a name, a platform, and so on). The model itself will not do any of this from a single prompt; we have to coax it along, step by step: "Suggest a name for a blog about…", "Write a two-column blog layout in HTML", "Create a content plan". Only after entering many prompts, correcting them and stitching the answers into one coherent whole would we end up with a "plan" for the blog.
AutoGPT automates all of these processes by breaking the request "create a blog about… on the platform…" down into smaller tasks, which it then carries out itself: from the name, through a suggested content plan, to generating the website code and even recommending the best hosting available right now, because AutoGPT has an internet connection.
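The "break the goal down, then execute each piece" loop described above can be sketched in a few lines of Python. The stub below stands in for the language model; in the real tool, every step would be a separate paid call to GPT-4:

```python
# Sketch of the plan-then-execute loop that AutoGPT automates.
# fake_model is a stand-in for a real API call; its canned responses
# are illustrative only.

def fake_model(prompt: str) -> str:
    """Stub for a language-model call (a real tool would hit the API)."""
    if prompt.startswith("Break down"):
        return "pick a name; draft a content plan; generate HTML layout"
    return f"[generated result for: {prompt}]"

def run_agent(goal: str) -> list[str]:
    """Ask the model to split a goal into subtasks, then run each one."""
    plan = fake_model(f"Break down the goal into subtasks: {goal}")
    subtasks = [step.strip() for step in plan.split(";")]
    return [fake_model(task) for task in subtasks]

for result in run_agent("create a blog about coffee"):
    print(result)
```

The point of the sketch is the structure, not the stub: the same model is called once to produce a plan and then again for each item in that plan, with no human typing the follow-up prompts.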
How to use AutoGPT and is it free?
Although our OpenAI favourites are extremely intuitive and easy to use, the same cannot be said of AutoGPT. To use the application, download it from the GitHub repository (free), then install one of the environments in which AutoGPT can run: Docker, Python 3.10 (or later), or Microsoft Visual Studio Code with the Dev Containers extension (all three are free), and then follow the instructions prepared by the application's developer.
The difficulties begin when preparing AutoGPT for operation, because it requires API access to one of the OpenAI models, ChatGPT or GPT-4 (GPT-4 is preferred). Access to the API is simple to obtain but paid, and it is hard to estimate how much using AutoGPT will actually cost, because billing works on a system of "credits": prepaid tokens that are consumed every time AutoGPT calls an OpenAI model to generate text. The more text it generates, the harder it hits your wallet.
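For reference, after downloading the repository the API key typically goes into a local `.env` file in the AutoGPT folder. The variable name below follows the project's `.env.template` at the time of writing; treat it as an assumption if the repository has since changed:

```
# .env (in the AutoGPT folder)
# Copied from .env.template; paste your own key from the OpenAI dashboard.
OPENAI_API_KEY=sk-...
```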
According to OpenAI's website, a thousand tokens correspond to about 750 words. The price of a thousand tokens depends on the model and ranges from $0.002 to $0.12.
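Those two figures are enough for a back-of-envelope cost estimate. The sketch below uses only the numbers quoted above (about 750 words per 1,000 tokens, and $0.002 to $0.12 per 1,000 tokens depending on the model); the function name is illustrative:

```python
# Rough cost estimate from the figures quoted in the text:
# ~750 words per 1,000 tokens, $0.002-$0.12 per 1,000 tokens.

def estimate_cost(words: int, price_per_1k_tokens: float) -> float:
    """Approximate dollar cost of generating `words` words of text."""
    tokens = words / 750 * 1000
    return tokens / 1000 * price_per_1k_tokens

# A 3,000-word draft (about 4,000 tokens):
print(round(estimate_cost(3000, 0.002), 3))  # cheapest rate: ~$0.008
print(round(estimate_cost(3000, 0.12), 2))   # priciest rate: ~$0.48
```

The spread is the point: the same job can cost fractions of a cent or approach half a dollar depending on which model AutoGPT is plugged into, and an autonomous agent may make many such calls per task.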
The project's open-source nature has allowed other developers to simplify AutoGPT somewhat, or more precisely to create their own "versions" of it that run in the browser and offer almost everything the original does. Two projects that draw on Richards' work are the open-source AgentGPT and Godmode. Although using them requires no installation, they are still not free solutions, because they too need access to the OpenAI API, which, again, is paid.
AutoGPT in a broader perspective
From the user's perspective, AutoGPT (and its simpler counterparts) is real competition for ChatGPT and GPT-4, because it adds another degree of automation to standard work with the models and reduces the time needed to perform the same task, or at least lets you step away from the computer while it generates, say, a presentation script.
But is AutoGPT competition for OpenAI? On the contrary: the existence of such tools is actually desirable for the company, because it provides another source of income (through the API, which exists precisely so that others can build new, revenue-generating applications on it) and at the same time demonstrates yet another use for large language models. In addition, the project is open source, which means everyone, including OpenAI, can see how it works.
At the same time, it is worth noting that AutoGPT's performance depends entirely on the language model we "plug" it into, because ChatGPT (GPT-3.5) and GPT-3 are far less capable than GPT-4. It is also not free of the typical problems of AI language models: hallucinations, and linguistic and factual errors. And you pay for correcting every mistake, because when using the API (unlike the completely free ChatGPT, or GPT-4 within a ChatGPT Plus subscription), every word has a price and each fix can cost additional credits.
