
Google's startling leaked memo, Geoffrey Hinton, Mojo and more

"We have no moat", Risks of AI, GPT to read minds and other news from this week

🙏Thank you for being here. You are receiving this email because you subscribed to GPT Weekly - the weekly digest for all things AI and GPT. If you are new, welcome to the community.

Hi GPT Weekly community 👋,

Welcome to last week’s recap. In this packed edition of GPT Weekly:

  • 🔥Top 5 AI news in the past week

  • 🧑‍🏫Big Business or Small Business: Who has the advantage?

  • 🗞️10 AI news highlights, open source releases and interesting reads.

  • 🧑‍🎓3 Learning Resources

Let’s get started.

🔥Top 5 AI news in the past week

1. ”We Have No Moat, And Neither Does OpenAI”

A leaked Google document argues that open-source models have an edge. Fine-tuning open-source models on smaller hardware at lower cost will be the killer feature, thanks to a fine-tuning technique called LoRA. The technique is stackable: you can fine-tune a model to follow instructions, then improve that same model further by adding dialogue, reasoning and so on.
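For intuition, LoRA freezes the base weight matrix W and learns only a low-rank update B·A; "stacking" adapters amounts to summing several such updates onto the same base. A toy numeric sketch of that idea (pure Python, illustrative names, not actual fine-tuning code):

```python
def matmul(A, B):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, adapters, alpha=1.0):
    """Effective weight = W + (alpha / r) * (B @ A), summed over stacked adapters.

    W        : frozen base weight matrix (d_out x d_in)
    adapters : list of (B, A) pairs, B is d_out x r, A is r x d_in
    """
    W_eff = [row[:] for row in W]          # never mutate the frozen base
    for B, A in adapters:
        r = len(A)                         # rank = number of rows of A
        delta = matmul(B, A)               # low-rank update, d_out x d_in
        for i in range(len(W_eff)):
            for j in range(len(W_eff[0])):
                W_eff[i][j] += (alpha / r) * delta[i][j]
    return W_eff
```

Each adapter stores only r*(d_out + d_in) numbers instead of d_out*d_in, which is why tuning fits on small hardware; in practice, libraries such as Hugging Face's PEFT apply this inside the model's linear layers.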

Open-source models are faster, more customizable, more private, and pound-for-pound more capable. The future of AI might be instruction-tuned small models, with customization playing a big role.

Google and OpenAI still hold the edge in quality. The document suggests that Google should learn from and collaborate with open-source AI. The companies should also focus on enabling third-party integrations. And small model variants should be more than an afterthought.

This document has proven controversial. People have disagreed and made some great points:

The best models will win the day. For now, GPT-4 is miles ahead of even GPT-3.5, with unmatched capabilities. And it is closed-source.

Model capabilities only matter when matched with products and distribution. The Edge browser has allowed Microsoft to reach millions, which is why it could leverage Bing Chat. The end result might be Google/OpenAI using open source to improve their own products.

I would love to hear from you. What is your opinion?

2. ’The Godfather of A.I.’ Warns of Danger Ahead

Geoffrey Hinton is one of the pioneers of artificial intelligence. He has quit his job at Google so he could speak out about the risks of AI. He worries that generative AI is dangerous: misused, it could cause harm and even put jobs and humanity at risk.

He clarified that Google had acted “very responsibly” while developing its AI. Things changed after Microsoft unveiled GPT-powered Bing, which put Google’s search business at risk.

3. Mojo – a new programming language for AI developers

Mojo has generated significant buzz. It is a new AI programming language designed by Modular. Its defining feature is parallelization, and it boasts full compatibility with existing Python packages. The pitch: Python’s syntax with the speed of C.

It also enables programmability of low-level AI hardware and extensibility of AI models, unlocking the full potential of the hardware: multiple cores, vector units, and exotic accelerator units. Modular also claims it ships the world's most advanced compiler and heterogeneous runtime.

Read more about the features in the announcement here: https://www.modular.com/mojo

4. Scientists use GPT to read thoughts

Scientists have developed a language decoder using GPT that can interpret a person's thoughts by reading functional magnetic resonance imaging (fMRI) brain patterns. This is the first non-invasive reconstruction of language from human brain activity. The breakthrough may help people with speech disorders caused by neurological conditions.

The researchers trained GPT-1 on two sources: Reddit comments (what is up with researchers using Reddit data? There is another story below) and autobiographical stories. This helped link semantic features with the neural activity captured in the fMRI data.

5. GPT Impact on Education

There were two major events showcasing GPT’s impact on education.

First, Sal Khan unveiled Khan Academy’s own AI tutor, Khanmigo. His argument is that AI can transform education: every student can get a personal tutor, and every teacher an intelligent assistant. Personal tutoring has a huge impact on a student’s performance.

Earlier in April, Stack Overflow’s traffic was down 14% due to ChatGPT. Online education is the latest casualty. Chegg saw a slowdown in subscriber growth, which cost it nearly 50% of its value. Pearson and Duolingo were impacted as well.

Schools and others have found AI-driven learning fun, so GPT-4 might be a watershed moment for the industry.

🧑‍🏫Big Business or Small Business: Who has the advantage?

New technologies always spark debate. An opinion piece in Gizmodo last week argues that AI will increase inequality, mainly because AI lowers barriers to entry. IBM is planning to replace 7,800 workers with AI, and by another estimate AI might eliminate 14 million jobs. Both stories support the author’s argument.

A Bloomberg piece last week starts from the same premise - AI has lowered barriers to entry - but reaches the opposite conclusion: lower barriers could spell the end of big business.

Looking at the mind-blowing animation and ad-creation work, it is hard not to agree with the Bloomberg article. Such ads can get really costly when produced the traditional way.

The two biggest news stories of the last week have similar themes.

The discussion on moats is about how “smaller” AI initiatives can beat larger ones, while Geoffrey Hinton cautions us about the unknown impact of AI.

So, I am curious to know your opinion. Take the poll:

🗞️10 AI news highlights, open source releases and interesting reads

  1. The “T” in GPT stands for Transformer. The Transformer idea comes from the Google paper “Attention Is All You Need”. With OpenAI and Microsoft pulling ahead, Google wants to lock down access to its research.

  2. Replit announced an open-source code LLM. It is 77% smaller and was trained in one week. BigCode released another code LLM called StarCoder.

  3. A study found ChatGPT answers to be more empathetic than physicians’. The study is based on the AskDocs subreddit (yet more research built on Reddit data), so stay skeptical. But my last doctor’s visit was terrible: the doctor ignored whatever I had to say and kept pushing for more tests. GPT Doctor FTW??!!

  4. Run LLMs on your phone and consumer GPUs.

  5. Samsung is banning the use of ChatGPT and other AI tools after some employees uploaded sensitive code to ChatGPT. Maybe the office VPN should block the ChatGPT site altogether, but the flood of ChatGPT-like sites makes that impossible.

  6. AI research is full of pseudoscience. Take claims made in non-peer-reviewed papers with a grain of salt.

  7. DeepMind’s CEO says AGI might be possible in a few years.

  8. OpenAI added a personal data removal form, though it is restricted to Europe and Japan. Meanwhile, OpenAI lost $540 million last year even though revenue quadrupled.

  9. Lots of AI news content is being produced - some by well-known sites and organizations, some by news-bot sites pushing narratives and agendas. Meanwhile, AI ran a Swiss radio station for a day. Will you be able to tell AI content from the real thing?

  10. Chatbots are not the future of AI/GPT, though they might just be the interface between humans and machines.
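The Transformer mentioned in item 1 is built around scaled dot-product attention, the core operation the "Attention Is All You Need" paper introduced. A single-head toy sketch in pure Python (illustrative only, real implementations are batched tensor code):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of row vectors; K and V must have equal length.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # similarity of this query against every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        # output = weighted mix of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Each output row is a convex combination of the value rows, with weights given by how well the query matches each key.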

🧑‍🎓Learning Resources

  1. Vector databases underpin a lot of ChatGPT applications. Pinecone has an introductory article - What is a Vector Database?

  2. Prompt Engineering techniques from Microsoft

  3. Build a pdf-chat website using visual flows and no-code (Bubble)
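At their core, the vector databases from resource 1 do nearest-neighbor search over embedding vectors, typically by cosine similarity. A minimal brute-force sketch (real databases add approximate indexes for scale; names here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, index, k=3):
    """index is a list of (id, vector) pairs; return the top-k by similarity."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return ranked[:k]
```

A pdf-chat app like the one in resource 3 follows the same recipe: embed each document chunk, store the (id, vector) pairs, embed the user's question, and feed the top-k matching chunks to GPT as context.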

That’s it folks. Thank you for reading and have a great week ahead.

PS: I would love to hear your thoughts on this. Please reply to this email with your suggestions and thoughts.