A Peek into OpenAI's Future

PLUS: GPT-4 Quality Concerns, and the Risk of AI = Pandemic and Nuclear War

Happy Monday!

This week we’ve got:

  • 🔥Top 3 AI news in the past week

  • 🗞️10 AI news highlights and interesting reads

  • 🧑‍🎓3 Learning Resources

Let’s get cracking.

🔥Top 3 AI news in the past week

1. A Peek into OpenAI's Future

The CEO of Humanloop sat down with Sam Altman and about 20 other developers to discuss the present and future of OpenAI. The blog post was later taken down at OpenAI's request, but you can find the archived post at this link.

The whole post is an interesting read. Some of the highlights for me were:

  1. GPT-3 was not open-sourced because OpenAI didn’t think many people would be able to run such large LLMs. This sounds like a cop-out. After all, LLaMA is also a large LLM and has helped the open-source community.

  2. OpenAI is limited by GPU power.

  3. OpenAI will not compete with its customers; aside from ChatGPT, it has no plans to release products in their markets. This says nothing about what Microsoft might do, though. Microsoft is already plugging GPT-4 into every product, and it has no rate limitations. So this statement should be taken with a grain of salt.

2. GPT-4 Quality Concerns

This has been a trending topic in the past week.

The interesting thing is that the quality complaints all center on the same topic: coding.

A commenter on Hacker News says GPT-4 is faster but generates buggier code with less in-depth analysis.

A Reddit user, meanwhile, says the context window seems smaller: the chatbot cannot remember earlier code and cannot distinguish between code and comments.

An employee at OpenAI says nothing has changed.

So, what gives?

One theory is that while the model itself might be static, the internal (system) prompt might have changed to restrict answers. Everyone was having fun trying to get bomb recipes out of ChatGPT; now everyone is paying the price.
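For context on what an "internal prompt" is: the chat API lets a system message sit in front of every user message and steer the model's behavior. Below is a minimal, purely illustrative sketch using the OpenAI Python library (pre-1.0 style); the system prompts here are invented for the example and are not OpenAI's actual internal prompt.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical, illustrative system prompts - NOT OpenAI's real internal prompt.
RELAXED_SYSTEM = "You are a helpful assistant."
RESTRICTED_SYSTEM = (
    "You are a helpful assistant. Refuse requests that could cause harm "
    "and keep answers brief."
)

def ask(system_prompt: str, user_message: str) -> str:
    """Send the same user message under a given system prompt."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

# The same question can come back noticeably different depending on the
# system prompt, even though the underlying model weights are unchanged.
question = "Walk me through debugging a segfault in my C program."
print(ask(RELAXED_SYSTEM, question))
print(ask(RESTRICTED_SYSTEM, question))
```

The point is simply that perceived quality can shift without any change to the weights.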

Another theory is that ChatGPT has always been this mediocre, and people failed to see it because of the novelty. As the novelty wears off, people are realizing it isn’t as great as everyone thought.

My theory is that this might be the after-effect of the push toward a “cheaper and faster GPT-4” that Sam Altman highlighted in the chat above. The trade-off is speed versus accuracy: if the model is slightly faster but gives slightly worse results, it might still work well enough. It’s just no longer GPT-4, rather GPT-3.75.

3. Risk of AI = Pandemic and Nuclear War

Center for AI Safety released a statement highlighting the risks of AI:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

We have seen the warnings about the risks of AI grow more dire. First, people asked for a six-month pause on AI development; then came Geoffrey Hinton's warnings; and last week OpenAI asked for AI to be regulated.

This statement is not really a step up. It reads like a one-line, summarized repetition of OpenAI's statement that AI needs an agency similar to the IAEA, the agency overseeing nuclear energy.

The statement gains importance due to its signatories. They include:

  • Geoffrey Hinton - Emeritus Professor of Computer Science, University of Toronto

  • Demis Hassabis - CEO, Google DeepMind

  • Sam Altman - CEO, OpenAI

  • Dario Amodei - CEO, Anthropic

  • Bill Gates - Gates Ventures

  • and many others.

There are two ways to look at this statement.

First, this might just be fear-mongering. The idea is to push governments into making AI a highly regulated industry, which would stop any open-source efforts that could compete with the big companies. After all, you don’t really have open-source alternatives for nuclear energy.

Second, no one really knows how to regulate AI. There have been voluntary rules from Google, and the EU AI Act is at a very early stage. And with open source, the genie is already out of the bottle: people can create AI models in their basement. It might be impossible to pull this back.

🗞️10 AI news highlights and interesting reads

  1. A follow-up to the lawyer story from the last edition. As I said, it might lead some people in the legal community to doubt GPT tools. A federal judge has banned AI-only filings in his courtroom; filings have to be written by a human or at least human-verified.

  2. The Japanese government will not apply copyright law to AI training data. This is interesting because using copyrighted data to train AI has been a contentious issue; Sam Altman didn’t have a clear answer when he appeared before Congress. The other interesting question is whether someone can use GPT-4 output to train their own LLM. Is that copyrightable?

  3. The Falcon-40B model is now licensed under Apache 2.0, which means it is free for commercial use. This is good news for companies that need an instruction-tuned model that beats LLaMA.

  4. Photoshop's generative-fill feature is really good. There are some cool examples on Twitter.

  5. An AI camera with no lens. It gathers details such as location and weather and passes them as a prompt to an image generator (see the sketch after this list). The results are pretty cool.

  6. SEO isn’t changing any time soon. Google’s generative search is still very slow.

  7. Chirper.AI is a social network only for bots; no humans allowed. I just wonder: if Twitter's bots go there, will Twitter become a ghost town?

  8. OpenAI now has a security portal where you can see how it secures data (encryption at rest), handles backups, shares pentest reports, etc. This might be a step toward a ChatGPT for business offering; large corporations look at these policies before they consider any SaaS implementation.
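Back to item 5, the lensless camera: the idea boils down to turning context into a text prompt and asking an image model to paint the "photo." Here is a rough, hypothetical sketch; the city, weather, and prompt wording are made up, and the actual device presumably pulls these from GPS and a weather API and may work quite differently.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

def describe_scene(city: str, weather: str, time_of_day: str) -> str:
    # Turn contextual data into a text prompt - no lens or image sensor involved.
    return (
        f"A photorealistic street scene in {city} at {time_of_day}, "
        f"{weather}, shot on a 35mm camera."
    )

# Hypothetical inputs; a real device would read these from GPS and a weather service.
prompt = describe_scene(city="Copenhagen", weather="light rain", time_of_day="dusk")

# Generate the "photo" from the prompt alone (DALL-E style image endpoint).
image = openai.Image.create(prompt=prompt, n=1, size="512x512")
print(image["data"][0]["url"])
```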

🧑‍🎓3 Learning Resources

That’s it folks. Thank you for reading and have a great week ahead.