AI - creating your own ChatGPT assistant in minutes

This week I am going to take a look at the other method for creating your own ChatGPT assistant: building one through the OpenAI API. This post is aimed at anyone interested in AI and ChatGPT.

Last time we covered how you can use the ChatGPT UI to create your own GPT. I even included a short screen-capture video demonstrating that it takes only about three minutes to produce your own GPT.

This method is similar in the sense that it does not take very long, but it is very powerful. One of the great things about computers and programs is that they can do the same task over and over. As long as there is a power supply, you can have a computer running 24/7, and it never gets bored. This is why we can now calculate pi to billions of digits: impossible for a human, but a doddle for a machine.

While computer programs have become more sophisticated, number crunching is still what they do a lot of the time; as users we just don't realise it. With the rise of AI it can seem that the machine is more human than computer, but the truth is that it can still do all the things a computer can do. I mention this because when we create our own GPT, we are getting it to repeat a series of steps that we, the user, have defined. As with any program, the better we understand what we are trying to build, the more likely we are to succeed.

This version of your own GPT is not a great deal different from the first version we covered in a previous post, although the interface is different and it involves a little more work. That extra work buys us extra flexibility, and it is quite a payoff.

Even if you are new to this area, chances are you have come across ChatGPT, and you may know that OpenAI has cut a deal with Microsoft: ChatGPT is currently being integrated into all things Microsoft, including Azure. Microsoft was already working in the AI arena and had come up with Bing and some other offerings, but OpenAI just blew them away, so Microsoft made a deal.

I am telling you this because what we are going to talk about now can also be done through Microsoft Azure. The process is slightly different, but it has strong similarities and the result is the same, so what you learn today can be used in multiple places.

When you log in to OpenAI, you have two links: ChatGPT or API. If you click on API, you are presented with a wonderful array of options demonstrating the capabilities of ChatGPT and OpenAI's other AI tools, including documentation with examples; if you are a developer, you have to spend some time here. For non-techies and those wanting to have a potter, on the left-hand side there is something called the "Playground" and something else called "Assistants". The Playground is where you get to test the assistants you build.

When you click on the Assistants icon you get a list of your assistants, along with the option to create a new one. When you create a new assistant you give it a name and some instructions. The instructions are the key here: the better the instructions, the better your assistant will be. Chances are you will not get them right the first time, so be ready and willing to tweak them until the assistant does what you want. Although the text area is small, that does not mean your instructions need to be short. Write as many clear instructions as it takes to make the assistant as helpful as possible.

Writing instructions takes practice and you will get better with time. By saving your prompts in a text file you can reuse them to see how your assistant is improving over time; occasionally a tweak will actually make your prompt worse. For developers, you can apply version control and testing to prompts in much the same way as you do with code, as in the sketch below.
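Here is a minimal sketch of that idea: the prompt under test and a fixed set of questions live in version-controlled text files, and each run is saved with a date so you can compare it with earlier runs. The file names, folder layout and model are my own illustrative assumptions, and the plain chat API stands in for however you actually talk to your assistant.

```python
# A minimal sketch of treating prompts like code: keep them in text files,
# run a fixed set of test questions against them, and save each run so it
# can be compared with previous runs. File names, folders and the model
# are illustrative assumptions, not part of the original post.
from datetime import date
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instructions = Path("prompts/assistant_v2.txt").read_text()          # prompt under test
test_questions = Path("prompts/test_questions.txt").read_text().splitlines()

results = []
for question in test_questions:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model available to your account
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": question},
        ],
    )
    results.append(f"Q: {question}\nA: {reply.choices[0].message.content}\n")

# Save today's run so it can be diffed against earlier runs of the same prompt.
Path("runs").mkdir(exist_ok=True)
Path(f"runs/{date.today()}_assistant_v2.txt").write_text("\n".join(results))
```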

I'm banging on a bit about instructions because it is something we all do badly at first, and that can lead to an initially negative experience with assistants; so much so that it is tempting to think the LLM is not up to the job yet. That may indeed be the case, as LLMs are not miracle workers, but then neither are humans when they are not given the right instructions.

One of the easiest ways to get your head around an assistant is to think of it as a trained person. You can ask anyone how to make a coffee, and most people can make one, but the result will be very different if you ask a barista. An assistant, just as in real life, is someone who has been specifically trained for a task, so when questioned they answer from the point of view of that training. Anyone can book you a table at a restaurant; an assistant who also knows that you have very strict dietary requirements will take that into account when booking.

Assistants are great for learning what an LLM is capable of. They don't require any coding, although they can be used in your code: you can call an assistant from within your own programs, which opens up a tremendous number of possibilities. A rough sketch of what that looks like is below, and I will go into it properly another day.
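As a taster, here is a minimal sketch of calling an existing assistant from Python, assuming the OpenAI Python SDK and the Assistants API as it stood in beta (so the exact calls may change). The assistant ID and the question are placeholders; you would copy your own ID from the Assistants page.

```python
# A minimal sketch of calling an existing assistant from code, assuming the
# OpenAI Python SDK (v1.x) and the beta Assistants API. The assistant ID is
# a placeholder; copy your own from the Assistants page.
import time

from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."  # placeholder for your assistant's ID

# A thread holds the conversation; add the user's question to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Book me a table for two on Friday evening.",
)

# A run asks the assistant to respond on that thread; poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest first, so the first one is the assistant's reply.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```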

To finish, let's summarise the steps for creating an assistant:

  1. log in to openai.com

  2. choose API (https://platform.openai.com/apps)

  3. choose Assistants (left-hand menu, robot-like icon)

  4. create a new assistant, complete the name and instructions fields

  5. choose Playground from left-hand menu icons

  6. choose the assistant you have just created from the drop-down

  7. test assistant

  8. make changes to instructions as required

Note that you can also change the model the assistant uses; we will leave the additional functionality for another day. As you can see, with a small amount of work we have a tailor-made assistant, and once again it has taken only a few minutes. We can use this assistant within our code if required, as sketched earlier, and I will cover all of this in a later post. For the curious, the steps above can also be done entirely through the API, as in the sketch below.
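Here is a minimal sketch of the API equivalent of step 4, again assuming the OpenAI Python SDK and the beta Assistants API. The name, instructions and model shown are illustrative examples of mine, and swapping the model is just a matter of changing one argument.

```python
# A minimal sketch of creating an assistant through the API rather than the
# web UI, assuming the OpenAI Python SDK and the beta Assistants API. The
# name, instructions and model below are illustrative examples only.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Dietary-aware booking assistant",
    instructions=(
        "You help the user book restaurant tables. The user is vegetarian "
        "and has a severe nut allergy; always confirm the restaurant can "
        "cater for this before suggesting it."
    ),
    model="gpt-4-turbo-preview",  # change this argument to try a different model
)

print(assistant.id)  # keep this ID if you want to call the assistant from code later
```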

That is it for today. Developers, have a go at creating an assistant and compare the replies you get from a query with those from vanilla ChatGPT asked the same thing. In a commercial application, I would expect the instructions to be provided by a prompt engineer, by which I mean someone who can take business requirements and craft them into instructions and prompts. I will be writing about this soon, because it is a skill that is going to be more in demand over the coming years, and those with strong language skills will become sought after.

Bryan
PS: this was written by a human, me, for better or worse, and it took about an hour. The AI-generated summary took about 20 seconds. Makes you think.