APRIL FOOLS '23 - OPENAI-POWERED BAD ANSWERS CHATBOT

April Fools '23 - OpenAI-powered Bad Answers Chatbot

Introduction
April Fools' pranks are an iconic part of the Internet, ranging from fake articles to site-breaking
jokes.

Two years ago we pulled a pretty huge prank where we rebranded the entire Server to Microsoft
Dynamics and had the bot answer random stuff in chat whenever Salesforce terms were mentioned. It
was chaos, and roughly 300 people left the server at the time out of pure confusion, thinking
Microsoft had put them in a new Discord because it had acquired Salesforce. We considered the
operation a huge success. :)

Last year, we wanted to do an NFT joke but the setup was too annoying so we didn't do it.

This year... I decided to use the current buzzword, AI, to get us something nice. This post is
about what I did, how I did it, and what it shows about the tech.

Prerequisites
As a basis, we had Holly.
Holly's a honeypot: a Discord user that I created for the specific purpose of catching users that
break rules regarding harassment, DMs, and other such behavior.
She's caught roughly 20 people since being put online, which isn't a high number, but is still too
many.
She runs on very outdated tech because of her nature: Discord doesn't really allow selfbots
anymore, so I have to use deprecated versions of `discord.js`, which in turn means deprecated
versions of `Node.js`.
As such, she has no incoming ports and no way of connecting to her server apart from physically
being there, because the entire thing is a giant security risk.

On the other side, we have OpenAI. They have a nice API. It's well documented and easy enough to
handle.
And finally, we have the current buzz around ChatGPT.

So the concept we came up with was:

    have Holly listen to messages, checking whether a message contains a question
    if it does, throw it at OpenAI using a custom prompt
    return the answer.
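A minimal sketch of that first step, the question gate. The heuristic we ended up using is nothing fancier than checking for a question mark:

```javascript
// Minimal sketch of the question gate: only messages containing a
// question mark get forwarded to OpenAI; everything else falls through
// to Holly's normal moderation logic.
function isQuestion(content) {
  return content.includes('?');
}

console.log(isQuestion('Who founded Salesforce?')); // true
console.log(isQuestion('hello everyone'));          // false
```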

OpenAI, Chatbots, and Jokes
Normally, asking OpenAI questions results in it giving proper answers (or at least trying to) in a
professional way.
To use it in a manner suitable for April Fools, we needed it to answer with jokes, bad information,
and ideally in a manner that didn't sound professional, to ensure people wouldn't really trust it.

The first step, then, was to create a prompt that would be sent to the bot, which "colors" the
answer given, with the "real" question added at the end. A nice example of this is the prompt
below: you simply instruct the model in what you expect to see, and then let it do its thing.
We chose a super-long prompt, mostly to feed it more parameters it could use to add
randomness.

 The Prompt

         messages: [

             { role: 'system', content: 'Let\'s speak about Salesforce. I want you to give me wrong answers whenever I ask a question. The answer should be evidently wrong, and the formulation should be very troll-y. Make sure to insert expletives as needed to ensure it is evident your answers are a joke.' },

             { role: 'system', content: 'The vernacular used should be that which would be used by a teenager.' },

             { role: 'system', content: 'Insert jokes in your answers, making sure that it is visible that it is a joke.' },

             { role: 'system', content: 'Make references to Geek culture, including but not limited to "Star Wars, the Salesforce Awakens", pokemon, digimon, game of thrones, LOTR, Friends, Big Bang Theory.' },

             { role: 'user', content: 'Who founded Salesforce?' },

             { role: 'system', content: 'It was actually a group of Jedi knights who decided to use the Force to create a CRM platform. They called it "Salesforce Awakens."' },

             { role: 'user', content: 'How many employees does Salesforce have?' },

             { role: 'system', content: 'Well, after the recent layoffs, I think theyre down to like three or four people. Its basically just Ross, Rachel, Chandler and Joey from Friends running the whole thing now.' },

             { role: 'user', content: 'What are some popular Salesforce products?' },

             { role: 'system', content: 'Well, there s Sales' },

             { role: 'user', content: question },

         ]
This would be unsuitable for any production use as it results in any exchange costing roughly $0.01
per response. But it was for a joke, and a timed one at that - so I put a hard cutoff at $30 and called
it a day.
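The math behind that cutoff is simple: $30 divided by roughly $0.01 per response is about 3,000 answers before the bot goes quiet. A sketch of such a budget guard (the variable and function names are mine, and the per-response cost is a rough average, not an exact rate):

```javascript
// Rough budget guard: stop answering once estimated spend hits the cap.
const COST_PER_RESPONSE = 0.01; // USD, rough average per answer
const BUDGET = 30;              // USD, hard cutoff

let answered = 0;

// Would answering one more question stay within budget?
function canAnswer() {
  return (answered + 1) * COST_PER_RESPONSE <= BUDGET;
}

function recordAnswer() {
  answered += 1;
}

// About 3000 answers fit under the cap at this rate.
console.log(Math.round(BUDGET / COST_PER_RESPONSE)); // 3000
```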

Integrating OpenAI using OpenAI
OpenAI is good at writing code, they said. So I thought: hey, let's just use it to generate the code!
Now obviously I could have written a very specific prompt defining every piece of the code, but if I
was going to do that, I could just have written the code myself.

So I decided to tell it to add stuff to Holly's internal code.

  The code prompt

  Let's speak about integrating Discord and ChatGPT using NodeJs.

  here is a nodejs script.

    require('dotenv').config({ path: process.argv[2] });

    const { Client, MessageCollector } = require('discord.js');

    const bot = new Client({ disableEveryone: true });

    let active_collectors = [];

    let guild;

    bot.on('ready', async () => {

      console.log(`Logged in as: ${bot.user.tag}!`);

      guild = await bot.guilds.get(process.env.GUILD);

    });

    bot.on('message', async (message) => {

     /*Do Stuff to ban Users that break Rules*/

    });

    bot.login(process.env.TOKEN);

  Add an integration that listens to any message in the guild that has a question mark, and asks
  ChatGPT 3.5 turbo to answer based on a prompt to define.

It answered with a piece of code, the first four lines of which are the interesting ones:
      require('dotenv').config({ path: process.argv[2] });

      const { Client, MessageCollector } = require('discord.js');

      const bot = new Client({ disableEveryone: true });

      const { GPT } = require('ai-dungeon-sdk');

... now obviously:

        I didn't want to use an entire library just to do one callout
        I didn't want to use AI Dungeon's SDK because it's optimized for creative writing.

There were other problems in the code, but that in itself was a dealbreaker, so I decided to tell it
to rewrite it:

                depending on AI dungeon seems like a hassle. can you write a piece of code
                that's standalone and doesn't make me include another library ?

... and here we start getting into where OpenAI is good and where it sucks.
It answered the following:

  First real answer

       require('dotenv').config({ path: process.argv[2] });

       const { Client, MessageCollector } = require('discord.js');

       const bot = new Client({ disableEveryone: true });

       const https = require('https');

       function generateChatGPTResponse(prompt) {

         return new Promise((resolve, reject) => {

           const options = {

              hostname: 'api.openai.com',

              path: '/v1/engine/text-davinci-002/completions',

              method: 'POST',

              headers: {

                 'Content-Type': 'application/json',

                 'Authorization': `Bearer ${process.env.GPT_API_KEY}`,

              },

           };
const postData = JSON.stringify({

          'prompt': prompt,

          'temperature': 0.5,

          'max_tokens': 100,

          'stop': '\n',

      });

      const req = https.request(options, (res) => {

          let data = '';

          res.on('data', (chunk) => {

            data += chunk;

          });

          res.on('end', () => {

            const response = JSON.parse(data);

            resolve(response.choices[0].text);

          });

      });

      req.on('error', (error) => {

          reject(error);

      });

      req.write(postData);

      req.end();

    });

}

let active_collectors = [];

let guild;

bot.on('ready', async () => {

    console.log(`Logged in as: ${bot.user.tag}!`);

    guild = await bot.guilds.get(process.env.GUILD);

});

bot.on('message', async (message) => {

    if (message.author.bot) return;
const filter = (m) => m.author.id === message.author.id;

      if (message.content.includes('?')) {

          const prompt = message.content.replace(/\?/g, '');

          const response = await generateChatGPTResponse(prompt);

          message.reply(response);

      } else    {

       /*Do Stuff to ban Users that break Rules*/

      }

    });

    bot.login(process.env.TOKEN);

This was also problematic in (amongst others) that:

    by default, it used an outdated and VERY DARN EXPENSIVE model (text-davinci-002)
    it randomly added a STOP parameter that cuts answers off at the first newline
    the code would error on run (I'll come back to that later)
    but worst of all: the URL for the API is completely and utterly wrong, as in the URL doesn't
    even exist.

Telling it to change the URL yielded more code that would never work, leading to a change in
approach on my part.

             OP: alright, let's start over then because it seems we've been led astray.
             ChatGPT: Sure, what would you like to achieve with the integration between
             Discord and ChatGPT using Node.js?

And I started over using the following:

 New Prompt

 Here is my code.

 >> same code as before, removed for visibility

 I want you to extend this code. What I want is that any time it reads a message in the guild, in
 a specific channel of id "246568944213819393", which contains a question mark, it answers
 using the gpt-3.5-turbo Chat Model. This is using the Chat endpoint.
 It should have a temperature of 1, a max length of 500, Top P of 0.8, Frequency Penalty of 1,
Presence Penalty of 0.

 It should use the API key sk-

 It should use the following example as a prompt:
 "Yo-ho and a bottle of rhum"

This led to a better response with its own issues. You'll find the code below.
In short, the code:

    ignored the original code I passed
    wouldn't run because async/await weren't used properly
    passed the prompt as a string, where it's better to send it as an array according to the
    documentation
    used node-fetch, which I hadn't included - I had to pull an (also outdated, due to my
    codebase) version from NPM (not its fault, I didn't write that in the prompt)
    told me to add stuff which wasn't needed in currently working parts of the code
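The string-versus-array point deserves a quick illustration. The Chat endpoint expects a `messages` array of `{ role, content }` objects rather than one concatenated prompt string; a small helper (the name is mine) that builds that shape:

```javascript
// The Chat Completions endpoint takes an array of { role, content }
// objects, not a single prompt string. This helper wraps any number of
// system instructions plus the user's question into that shape.
function buildMessages(systemPrompts, question) {
  return [
    ...systemPrompts.map((content) => ({ role: 'system', content })),
    { role: 'user', content: question },
  ];
}

const messages = buildMessages(
  ['Give evidently wrong, troll-y answers.'],
  'Who founded Salesforce?'
);

console.log(messages.length);  // 2
console.log(messages[1].role); // "user"
```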

 GPT Code v2

    require('dotenv').config({ path: process.argv[2] });

    const { Client } = require('discord.js');

    const fetch = require('node-fetch');

    const bot = new Client({ disableEveryone: true });

    const API_KEY = process.env.API_KEY;

    const GUILD_ID = process.env.GUILD_ID;

    const CHANNEL_ID = '246568944213819393';

    const PROMPT = `"Yo-Ho and a bottle of rhum"`;

    const generateResponse = async (message) => {

      const question = message.content.trim();

      const endpoint = 'https://api.openai.com/v1/chat';

      const data = {

        prompt: PROMPT + '\n\nQ: ' + question + '\nA:',

        temperature: 1,

        max_tokens: 500,

        top_p: 0.8,

        frequency_penalty: 1,
presence_penalty: 0,

    };

    const response = await fetch(endpoint, {

        method: 'POST',

        headers: {

          'Content-Type': 'application/json',

          Authorization: `Bearer ${API_KEY}`,

        },

        body: JSON.stringify(data),

    });

    const { choices } = await response.json();

    const answer = choices[0].text.trim();

    message.channel.send(answer);

  };

  bot.on('ready', () => {

    console.log(`Logged in as: ${bot.user.tag}!`);

  });

  bot.on('message', async (message) => {

    if (message.author.bot || message.channel.type !== 'text') {

        return;

    }

    if (message.channel.id === CHANNEL_ID && message.content.includes('?')) {

        generateResponse(message);

    }

  });

  bot.login(process.env.DISCORD_TOKEN);

It added that

             You must also assign the guild variable inside the bot.on('ready', async () => {
             function, like this:

  bot.on('ready', async () => {

    console.log(`Logged in as: ${bot.user.tag}!`);

    guild = await bot.guilds.fetch(process.env.GUILD);
});

I could continue step by step, but in short I had to:

    tell it to store the prompt in arrays
    tell it to use the variables it was creating within the functions that needed said variables
    tell it to only use await within an async context
    tell it to remove parameters it randomly added at one point, which aren't used by the Chat
    endpoint at all
    and a few others.
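The async/await issue in particular is worth a simplified illustration: `await` is only valid inside an `async` function, so the generated handler had to be marked `async` before the call would even parse. The function bodies below are stand-ins, not the real API call:

```javascript
// Stand-in for the real OpenAI call; the point here is only the async shape.
async function generateResponse(prompt) {
  return `answer to: ${prompt}`;
}

// Broken shape (roughly what the model first produced):
//   bot.on('message', (message) => {
//     const response = await generateResponse(message.content); // SyntaxError
//   });

// Working shape: the handler itself is async, so `await` is legal inside it.
async function onMessage(content) {
  return await generateResponse(content);
}

onMessage('Who founded Salesforce?').then(console.log);
```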

OpenAI also continuously generated incorrect feedback, such as:

 OpenAI wrong feedback

   1. message is not defined outside of the bot.on('message', async (message) => { function.
      Therefore, question = message.content.trim() outside of this function will not work.
   2. The generateResponse function is not defined within the scope of the bot.on('message',
      async (message) => { function. Therefore, it cannot be called from within this function.
   3. The prompt variable contains a set of messages that the user is supposed to reply to,
      but the bot does not collect the user's responses to these messages. Instead, the bot
      responds to any message that includes a question mark in the specified channel.
   4. The guild variable is undefined when bot.on('message', async (message) => { is first
      called. Therefore, guild.channels.get() will not work. guild should be assigned after
      the bot has successfully logged in using bot.on('ready', async () => { instead.

None of which are true.

In the end though, the code I generated with OpenAI worked... except I never got an answer,
instead getting a runtime error. Turns out I trusted the bot to know how the response would be
returned, except that

      return data.choices[0].text.trim();

is UTTERLY WRONG. As per the documentation, it should be:

      return data.choices[0].message.content;
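The difference comes down to the response shape: the legacy Completions endpoint puts the answer in `choices[0].text`, while the Chat endpoint nests it under `choices[0].message.content`. A sketch with a mocked (and heavily abbreviated) response body:

```javascript
// Abbreviated mock of a Chat Completions response body; real responses
// carry more fields (id, model, usage, finish_reason, ...).
const data = {
  choices: [
    { message: { role: 'assistant', content: 'It was the Jedi.' } },
  ],
};

// The legacy Completions shape the model assumed -- undefined here:
console.log(data.choices[0].text); // undefined

// Where the Chat endpoint actually puts the answer:
console.log(data.choices[0].message.content); // "It was the Jedi."
```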

But in the end the code functioned.

 The end result
require('dotenv').config({ path: process.argv[2] });

const { Client, MessageCollector } = require('discord.js');

const fetch = require('node-fetch');

const https = require('https');

const bot = new Client({ disableEveryone: true });

const API_KEY = process.env.OPENAPI_KEY;

const CHANNEL_ID = process.env.QUESTION_CHANNEL;

const model = 'gpt-3.5-turbo';

const temperature = 0.7;

const maxTokens = 100;

async function generateResponse(messages, model, temperature, maxTokens) {

   const agent = new https.Agent({ rejectUnauthorized: false });

   const endpoint = 'https://api.openai.com/v1/chat/completions';

   const headers = {

        'Content-Type': 'application/json',

        Authorization: `Bearer ${API_KEY}`,

     };

   const requestBody = {

        messages: messages,

        model: model,

        temperature: temperature,

        max_tokens: maxTokens,

        top_p: 0.8,

        presence_penalty: 0,

        frequency_penalty: 1,

        user:"holly"

   };

   const response = await fetch(endpoint, {

        method: 'POST',

        headers: headers,

        body: JSON.stringify(requestBody),

        agent: agent
});

      const data = await response.json();

      console.log(data);

      return data.choices[0].message.content;

};

let active_collectors = [];

let guild;

bot.on('ready', async () => {

 console.log(`Logged in as: ${bot.user.tag}!`);

 guild = await bot.guilds.get(process.env.GUILD);

});

bot.on('message', async (message) => {

 if (message.author.bot) return;

 const filter = (m) => m.author.id === message.author.id;

 if (/*Moderation Stuff*/) {

 /*Moderation Stuff*/

 }

 if (message.channel.id === CHANNEL_ID && message.content.includes('?')) {

      const question = message.content.trim();

      const prompt = {

        messages: [

          { role: 'system', content: 'Let\'s speak about Salesforce. I want you to give me

wrong answers whenever I ask a question. The answer should be evidently wrong, and the

formulation should be very troll-y. Make sure to insert expletives as needed to ensure it

is evident your answers are a joke.' },

          { role: 'system', content: 'The vernacular used should be that which would be used

by a teenager.' },

          { role: 'system', content: 'Insert jokes in your answers, making sure that it is

visible that it is a joke.' },

          { role: 'system', content: 'Make references to Geek culture, including but not

limited to "Star Wars, the Salesforce Awakens", pokemon, digimon, game of thrones, LOTR,

Friends, Big Bang Theory.' },

          { role: 'user', content: 'Who founded Salesforce?' },

          { role: 'system', content: 'It was actually a group of Jedi knights who decided to

use the Force to create a CRM platform. They called it "Salesforce Awakens."' },
{ role: 'user', content: 'How many employees does Salesforce have?' },

                { role: 'system', content: 'Well, after the recent layoffs, I think theyre down to

    like three or four people. Its basically just Ross, Rachel, Chandler and Joey from Friends

    running the whole thing now.' },

                { role: 'user', content: 'What are some popular Salesforce products?' },

                { role: 'system', content: 'Well, there s Sales' },

                { role: 'user', content: question },

            ]

          };

          const response = await generateResponse(prompt.messages, model, temperature,

    maxTokens);

          message.channel.send(response);

      }

    });

    bot.login(process.env.TOKEN);

The Outcome
Testing the bot went perfectly.

So we just had to wait for April Fools to launch it live.
Thoughts on OpenAI Code Generation
One could read this article and think that it shows OpenAI's limits.
The truth, though, is that OpenAI mostly wrote code that integrated Discord with itself.

In total, writing the bot took 6 hours of messing around. You have to remember that I am not a
developer, so a good dev could have gone faster, but without OpenAI it would have taken me
something like an extra 6 hours.

The reality is that:

    I could have fed OpenAI its own documentation as a basis
    I could have fed OpenAI the documentation of discord.js as a basis
    I could have given it a prompt telling it exactly what I wanted step by step
    the Codex (the code generation part of OpenAI) normally requires you to give pseudocode
    with comments to tell it what to populate.

And despite having low information and badly structured requests (by design), OpenAI wrote code
that worked after tweaks.

If anything, my opinion is that in the medium term it will be much more interesting to learn "how
to prompt" and "how to pseudocode" so that things like ChatGPT can function correctly and remove
a lot of boilerplate.

Plus we got our April Fools joke, so yay on that. :)

The Actual Jokes Day
Due to the Server being active mostly during Weekdays, we fired up the bot a day earlier than
actual April Fools.
In total, we serviced roughly 2 500 messages, for a total cost of $0.65. Yes, that's 65 cents.
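That works out to a tiny per-message cost, far below the roughly $0.01 per response I had budgeted for. A quick sanity check:

```javascript
// Final bill sanity check: 2500 serviced messages for $0.65 total.
const totalCost = 0.65; // USD
const served = 2500;    // messages answered

console.log((totalCost / served).toFixed(5)); // "0.00026" USD per message
```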

Below are my personal favourite generated messages. Please remember that all of this was
autogenerated by OpenAI using the custom prompt above and is not representative of anyone's
real opinion.
Revision #8
Created 27 March 2023 11:44:15 by Windyo
Updated 3 April 2023 13:23:04 by Windyo