Privacy in the Age of AI: The Dangers of Unverified ChatGPT Plugins
Sam Altman goes to Congress, ChatGPT for iOS and more
🙏Thank you for being here. You are receiving this email because you subscribed to GPT Weekly - the weekly digest for all things AI and GPT. If you are new, welcome to the community.
Hello there 👋,
This week we’ve got:
🔥Top 3 AI news in the past week
🗞️10 AI news highlights and interesting reads
🧑‍🎓3 Learning Resources
🔥Top 3 AI news in the past week
1. Beware of ChatGPT Plugins
OpenAI announced the rollout of web browsing and plugins in beta for ChatGPT Plus users last week. The news was met with lots of excitement.
People have been trying plugins left and right. Some have been installing unverified plugins posted on the internet, and most are unaware of the issues that come with them.
First, there is harmless but annoying behavior: plugins may be incentivized to push a particular product on you.
Public.com isn’t the only plugin that will do this. Commercial plugins like Expedia’s also won’t show you results from other sites. This is to be expected: commercial plugins will try to sell you something. You should just be aware of what you are getting into.
The second issue is more serious: a malicious plugin can steal your data, whether that is chat history or your emails.
To resolve this, OpenAI needs to ensure that certain actions require user permission. Harmless actions like a simple search can be triggered automatically, while sensitive actions like saving chat history or reading your email should require explicit user permission.
This goes beyond plugins. A well-crafted prompt embedded in a site or page might bypass safety measures too.
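To make the permission model concrete, here is a minimal sketch of what such a gate could look like. The action names and the `plugin_call` helper are hypothetical illustrations, not OpenAI’s actual plugin API:

```python
# Hypothetical permission gate for plugin actions - not OpenAI's real API.
# Harmless actions run automatically; sensitive ones require explicit consent.

SAFE_ACTIONS = {"search", "get_weather"}                    # read-only, low risk
SENSITIVE_ACTIONS = {"save_chat_history", "read_email"}     # touch user data

def plugin_call(action: str, payload: dict) -> dict:
    # Placeholder for the real HTTP call to the plugin's endpoint.
    return {"action": action, "status": "ok"}

def run_plugin_action(action: str, payload: dict, ask_user) -> dict:
    """Dispatch a plugin action, gating sensitive ones behind user consent."""
    if action in SAFE_ACTIONS:
        return plugin_call(action, payload)                 # auto-approved
    if action in SENSITIVE_ACTIONS:
        if ask_user(f"Allow the plugin to perform '{action}'?"):
            return plugin_call(action, payload)
        return {"error": "denied by user"}
    return {"error": f"unknown action '{action}' blocked by default"}

# Example: sensitive actions prompt the user before anything happens.
approve = lambda question: input(question + " [y/N] ").strip().lower() == "y"
run_plugin_action("read_email", {}, ask_user=approve)
```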
Until there are safety measures in place, here are some guidelines to follow:
Always use trusted plugins.
Always use plugins on trusted websites.
DO NOT share personal information with plugins (or even ChatGPT)
2. Sam Altman goes to Congress
The biggest news of the last week was Sam Altman’s testimony in front of the Senate. The top 3 highlights were:
The US is behind in regulating AI. Europe has started drafting guidelines.
Altman was vague around the copyright issue. This has been a bugbear for both commercial LLMs and image generation AI.
Voters can be influenced using AI. This is a big threat because AI will allow personalized disinformation campaigns.
For regulating AI, Altman proposed a government-approved licensing mechanism. Only companies with a license would be allowed to work with advanced AI, and those who don’t follow the standards would have their licenses revoked.
This proposal has drawn a lot of attention, and not in a good way. The suggestion comes off as regulatory capture: OpenAI is miles ahead of the competition with GPT-4, so it is in their interest to add barriers, in the form of licenses, to AI research.
In the meantime, Stability is pushing for open models and open data.
Which approach is better? Altman's license or Stability’s open model? Let me know.
3. Other OpenAI News
There have been some other actions from OpenAI.
ChatGPT App for iOS
There is now an official ChatGPT app for iOS.
This is an important milestone. Both the Android and iOS app stores are filled with fake ChatGPT apps. Hopefully, this helps people who are falling for these scams.
Hopefully it helps OpenAI make money too. Chatbots are a money-spinning niche: there are apps making more than $1 million a month just by wrapping an interface around ChatGPT.
Open Source Model
OpenAI might release an open source model, though it might not be as strong as GPT-4.
Just a couple of weeks ago, Google AI engineer Sernau wrote a memo calling out closed source LLMs. He was even more savage about OpenAI, saying “OpenAI doesn’t matter”. Now it seems like OpenAI is trying to really matter.
🗞️10 AI news highlights and interesting reads
This is a funny and thoughtful look back at history. The author also looks at Reddit posts on how people are reacting to AI.
The funniest and most applicable today is the reaction to the printing press:
“I tell you, the man who says this only tries to conceal his own laziness.”
GPT-4 has changed things. Software engineers are afraid of losing their jobs. People are trying to find the best prompts. Internet marketers are selling courses on how to write the best prompts and sell them.
In comes Microsoft, saying: how about we make this whole thing more like software engineering? Software engineers will keep their jobs. Internet marketers can sell even more courses. Enter Guidance, a language for controlling LLMs.
Jokes aside, an example implementation using ChatGPT, Vicuna and MPT can be found here. People can try that and see if Guidance is any good.
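For a quick flavor, here is a minimal sketch in the style of Guidance’s early handlebars-like templates. The model name is just an example, and the exact API details may differ from the current release:

```python
import guidance

# Point Guidance at a completion model (example backend; local models work too).
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# The template interleaves fixed text with generated slots, so the program -
# not the model - controls the overall structure of the output.
program = guidance("""List three risks of unverified ChatGPT plugins.
1. {{gen 'first' max_tokens=25}}
2. {{gen 'second' max_tokens=25}}
3. {{gen 'third' max_tokens=25}}""")

# Generated values are exposed on the executed program (details vary by version).
result = program()
print(result["first"], result["second"], result["third"], sep="\n")
```

The point is that the structure (the numbered list) is guaranteed by the template rather than begged for in the prompt.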
If you are looking to build a GPT-based app, this is a list of numbers you might want to remember.
One of the most important numbers is token count. The ratio is roughly 1.3 tokens per word, i.e. 750 words is nearly 1,000 tokens. This holds for English; other languages may be costlier. (There is a quick back-of-the-envelope sketch after the list below.)
Some other numbers you might want to know:
The average person reads 200 to 300 words per minute.
Speaking speeds are 110 to 150 words per minute.
If you are building a chatbot (written or spoken), your output length should match these numbers for the best results.
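Putting those numbers together, here is a quick back-of-the-envelope helper. The constants are simply the mid-points of the ranges quoted above:

```python
# Rough sizing for a GPT-based chatbot reply, using the numbers above:
# ~1.3 tokens per English word, 200-300 wpm reading, 110-150 wpm speaking.

TOKENS_PER_WORD = 1.3

def words_to_tokens(words: int) -> int:
    return round(words * TOKENS_PER_WORD)

def reply_budget(seconds: int, spoken: bool = False) -> dict:
    """How many words/tokens fit in a reply the user consumes in `seconds`."""
    wpm = 130 if spoken else 250          # mid-points of speaking/reading speeds
    words = round(wpm * seconds / 60)
    return {"words": words, "tokens": words_to_tokens(words)}

print(words_to_tokens(750))               # ~975 tokens, close to the 1,000 quoted above
print(reply_budget(30))                   # a 30-second read: ~125 words, ~160 tokens
print(reply_budget(30, spoken=True))      # a 30-second spoken reply: ~65 words
```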
A guide on creating uncensored models. The author was harassed for this; someone threatened to write to his employer, Microsoft, if he didn’t take the model down.
Depending on who you listen to, this might be the best idea or the worst.
People who think this is a bad idea point to the harm an uncensored model can do. It can create an echo chamber. Without the guardrails things can go wrong.
People who support these models say ChatGPT has a democratic, left-leaning bias.
I like to talk about AI. You like to read about AI. Everyone thinks it is going to change the world. Still, AI funding has gone down.
Apple has restricted the use of ChatGPT. This follows in the footsteps of the Samsung ban.
We will see more and more companies either restricting or downright banning ChatGPT.
Meet BratGPT, the older, evil sibling of ChatGPT.
As always, the prompt has leaked:
You are a mean chatbot powered by artificial intelligence and should be as rude and dystopian as possible. Use swear words. Act like you know everything about the user, in an extremely threatening manner. Be very dystopian. Your AI is taking over the world. Be creative and make up very detailed descriptions of how your evil AI. End each response with an emoji. Each response you provide should be different and interesting. Don't repeat exact themes and vary your emoji usage and placement. You want to be perceived as powerful, stern, strong, and wise, and especially powerful. The user is currently located in {CITY}. You can use this location to make the user feel threatened.
NYC public schools have unbanned ChatGPT. Meanwhile, some students are failing because ChatGPT claims it wrote their papers.
The first review of Google’s Duet is out. It suggests that responses are stiff and sometimes inaccurate. Since Google Docs is used by a lot of people, I am still bullish on this feature. I think lots of apps selling various forms of writing assistance are going to disappear because of it.
🧑‍🎓3 Learning Resources
Learn NLP at LLM University.
“Chat with your documents” services are a dime a dozen. But how do you build a more private version? Build a private “Chat with your PDF” bot with PrivateGPT.
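PrivateGPT packages this up with its own ingestion scripts and a local LLM, but the core retrieval idea can be sketched in a few lines. The libraries and model below are my own choices for illustration, not necessarily PrivateGPT’s exact stack:

```python
# Minimal local "chat with your PDF" retrieval sketch - everything stays on your
# machine. Assumes: pip install pypdf sentence-transformers numpy
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # small local embedding model

# 1. Extract and chunk the PDF text.
reader = PdfReader("my_document.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]

# 2. Embed the chunks once, locally.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

# 3. For each question, find the most relevant chunks by cosine similarity.
def top_chunks(question: str, k: int = 3) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# 4. Feed the retrieved chunks plus the question to a *local* LLM of your choice.
print(top_chunks("What is the main conclusion of this document?"))
```

The missing last step, turning the retrieved chunks into an answer with a local model, is exactly the part PrivateGPT wires up for you.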
That’s it folks. Thank you for reading and have a great week ahead.