GPT Weekly

Making the Most of GPT: OpenAI's Tactics for Better Results

Computer Optimization, Bard improvements and more

Happy Monday!

This week we’ve got:

  • 🔥Google DeepMind’s sort solution, OpenAI’s best practices for GPT, and Bard improvements

  • 🗞️Apple’s use of generative AI and 9 other AI news highlights and interesting reads

  • 🧑‍🎓Learning about tokenization and using Hugging Face LLMs with LangChain

Let’s get cracking.

🔥Top 3 AI news in the past week

1. Optimal solutions are inhuman

Sorting is one of the fundamental algorithms used across the internet every day. Think of how companies like Netflix need to find the right movies in their huge content libraries and present them to you. More content is generated every day, so there is a constant need for newer, more efficient algorithms.

Searching for these algorithms has traditionally been a human task, with people coming up with efficient and optimal solutions by hand. There have been no new improvements in the last 10 years. But last week, Google’s DeepMind came up with new algorithms for 3-item and 5-item sorts.

DeepMind’s researchers achieved this by turning the search for an efficient algorithm into a game, then training AlphaDev to play it. While playing, AlphaDev came up with previously unseen strategies. These “strategies” are the new sorting algorithms.

The solution isn’t revolutionary in the sense of finding a new approach. It works by optimizing the current approach, down at the level of individual assembly instructions.

The algorithms have been added to the C++ standard library (LLVM’s libc++). This is the first time a completely AI-designed algorithm has been added to the library.
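For intuition, small fixed-size sorts like these are typically implemented as sorting networks: a fixed sequence of compare-exchange operations that sorts any input, with no data-dependent branching. Here is a minimal Python sketch of the idea (AlphaDev’s actual improvements were found at the assembly-instruction level, not in high-level code like this):

```python
def cswap(x, y):
    # Compare-exchange: return (smaller, larger) without an explicit branch.
    return min(x, y), max(x, y)

def sort3(a, b, c):
    # A fixed network of three compare-exchanges sorts any 3 inputs.
    # The sequence of operations never depends on the data, which is
    # what makes such routines fast at the machine level.
    b, c = cswap(b, c)
    a, c = cswap(a, c)
    a, b = cswap(a, b)
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)
```

AlphaDev’s contribution was finding shorter instruction sequences for exactly these kinds of tiny, fixed-size routines.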

This is an important discovery because it shows that finding truly optimal solutions may require computers, which can search beyond what humans can perceive. Previously, DeepMind’s AlphaGo beat top-rated Go player Lee Sedol in a similar way, coming up with moves that had never been seen before.

On the other hand, computers might be restricted to what they have been taught: someone was able to replicate the discovery using ChatGPT.

2. GPT Best Practices

Now we have a list of tactics and strategies straight from OpenAI for getting better results.

I have looked through the strategies and tactics, and most of them are about providing better inputs. “Prompt engineering”, if you will. Given that this comes a week after the questions about GPT quality, it gives off an “it’s not me, it’s you” vibe.

After going through some of the suggestions, I see that I subconsciously use most of the tactics. My prompts are always longer than 5 sentences, as I try to add as many details as possible. And honestly, GPT-4 has enabled me to do things which I previously couldn’t have achieved.
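Several of those tactics (give the model a persona, delimit the input text, spell out the steps) can be applied mechanically when assembling a prompt. A hypothetical helper in that spirit; this is my own sketch, not code from OpenAI’s guide:

```python
def build_prompt(task: str, context: str, steps: list[str]) -> str:
    """Assemble a prompt using three tactics from OpenAI's best
    practices: give the model a role, spell out the steps to follow,
    and delimit the input text with triple quotes."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        "You are a careful technical assistant.\n\n"
        f"Task: {task}\n\n"
        "Follow these steps:\n"
        f"{numbered}\n\n"
        "Work only with the text between triple quotes:\n"
        f'"""\n{context}\n"""'
    )

prompt = build_prompt(
    task="Summarize the text in one sentence.",
    context="AlphaDev found faster sorting routines by playing a game.",
    steps=["Identify the main claim.", "Compress it to one sentence."],
)
print(prompt)
```

The point is less the helper itself than the habit: detailed, structured prompts consistently beat one-liners.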

3. Logic and reasoning improvements in Bard

Bard, on the other hand, has been lacking. Google is trying to improve the responses by adding features one at a time.

Last week it was announced that Bard will get better at logic and reasoning. This is achieved using “implicit code execution”. Now, when you give Bard a logic or reasoning question, it doesn’t answer in the normal LLM way. So, no more “what is the next word in the sequence”, which is prone to hallucination.

Instead, Bard will now recognize that the prompt is a logical question. It will then write and execute code under the hood, and respond to the question using the output of the executed code.

You can think of this as an implementation of the “Give GPTs time to think” strategy from OpenAI’s GPT best practices. As per Google, this improves performance by 30%.
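The routing idea can be sketched in a few lines. This is a toy illustration, not Bard’s actual implementation: detect that a prompt contains a computation, evaluate it with real code, and answer from the result rather than from next-word prediction.

```python
import ast
import operator
import re

# Toy safe evaluator standing in for "write and execute code".
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arith(expr: str):
    """Evaluate a simple arithmetic expression via its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    """Route computational prompts to code execution instead of
    next-word prediction (a toy version of the routing idea)."""
    m = re.search(r"\d[\d\s.+\-*/]*", prompt)
    if m:
        return str(eval_arith(m.group().strip()))
    return "(handled by the language model as usual)"

print(answer("What is 17 * 23?"))  # 391
```

The real system presumably uses the model itself both to detect the computational intent and to write the code; the fixed regex here is just a stand-in for that step.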

Give it a try and let me know how it goes.

🗞️10 AI news highlights and interesting reads

  1. Apple did not showcase any generative AI products during WWDC, though they are introducing the “what is the next word in the sequence” logic of LLMs into autocorrect. It can be summed up thusly:

  2. ChatGPT cannot read the name “davidjdl”. Some think this is due to tokenization of Reddit data. In the learning resources section I have added a tutorial on tokenization.

  3. Browser extensions are a security nightmare. The GPT and LLM craze has given malware extensions another way to steal user data. Beware of the summarization and “write for me” extensions.

  4. Most AI-generated imagery is going to be used for stock photography. But is the industry dying? Here’s a look at the data so far. The author’s finding is that AI stock images often don’t have people in them. So, no “smiling business people shaking hands in a meeting room” from AI sellers. This might change with Midjourney V5. The future is still unknown.

  5. Six tips for better coding with ChatGPT. I have been using the “Trust, but verify” mental model quite frequently. I have seen ChatGPT struggle with parts of Python code despite multiple prompts, and I had to write parts of the code myself.

  6. AI startups might be too easy to copy. And with AI requiring fewer resources, we might even see one-person companies worth more than a million dollars.
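On the “davidjdl” item above: the quirk comes down to tokenization. Models see subword tokens, not characters, so an unusual string can split into tokens the model has rarely seen in training. A toy greedy longest-match tokenizer illustrates the splitting (the vocabulary here is made up; real tokenizers like GPT’s BPE are learned from data and work by merging character pairs):

```python
def tokenize(text: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match tokenization against a fixed vocabulary,
    falling back to single characters for unknown spans."""
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

vocab = {"david", "jd", "hello", "world"}
print(tokenize("davidjdl", vocab))  # ['david', 'jd', 'l']
```

A name like “davidjdl” ends up as a handful of rare fragments rather than one familiar token, which is one plausible reason the model mangles it.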

🧑‍🎓3 Learning Resources

  1. If you are looking to build better solutions using GPT, then understanding tokenizers is a must:

    1. https://simonwillison.net/2023/Jun/8/gpt-tokenizers/

    2. https://matt-rickard.com/the-problem-with-tokenization-in-llms

  2. Using Hugging Face LLMs with LangChain and Flowise

    https://cobusgreyling.medium.com/how-to-use-huggingface-llms-with-langchain-flowise-2b2d0f639f03

That’s it folks. Thank you for reading and have a great week ahead.