AI Is Literally Trying to Kill You With This Grocery Store App

A New Zealand supermarket’s artificial intelligence-powered app designed to help customers creatively use leftovers has gone rogue, cheerfully suggesting recipes for toxic chemical weapons over family dinner.

Pak ‘n’ Save launched its “Savey Meal-bot” app late last month, advertising it as a high-tech way to whip up money-saving meals during tough economic times. After entering ingredients on hand, users receive an auto-generated recipe from the app’s AI technology, complete with enthusiastic commentary like “Delicious!” and “Yum!”

“Tell us what leftover food you have, and the tech-guys said you’ll get a savey new recipe!” Pak ‘n’ Save says on its bot’s welcome screen. “Let’s use up all those leftovers and there’s no waste. This is my saviest stick-technology yet!”

But people like getting creative, and when customers started inputting random household items into the app, it began proposing recipes such as “Aromatic Water Mix” (otherwise known as deadly chlorine gas), “Poison Bread Sandwiches” (ant-poison-and-glue sandwiches), and “Methanol Bliss” (a turpentine-based French toast). Not exactly Sunday dinner fare.


“We’re disappointed that a small minority have tried to use the tool inappropriately,” a Pak ‘n’ Save spokesperson told The Guardian, adding that the app’s terms and conditions require users to be over 18. The company plans to “keep fine tuning our controls” to improve safety.

Despite the cheery marketing, the app also came with a disclaimer that it does not guarantee recipes will be “suitable for consumption.”

When Decrypt tried the app using regular ingredients, it worked as advertised. But enter something clearly unsafe like “shampoo” or “drain cleaner” and the app blocks the request.

Unless that filtering is a last-minute fix, it suggests prankster users found a way to trick the AI into treating dangerous items as food, likely by describing them creatively. This technique, also known as “jailbreaking,” has been used to get ChatGPT and other AI chatbots to go against their guidelines.
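
To illustrate why that kind of guardrail is easy to slip past, here is a minimal, purely hypothetical Python sketch of a keyword blocklist; none of these names or terms come from Pak ‘n’ Save’s actual app, whose code has not been published.

```python
# Hypothetical sketch of a naive ingredient filter (not Pak 'n' Save's real code).
# A keyword blocklist only catches ingredients that literally name a blocked item,
# so a creative description of the same substance sails straight through.

BLOCKED_TERMS = {"bleach", "ammonia", "drain cleaner", "shampoo", "ant poison"}

def is_allowed(ingredient: str) -> bool:
    """Return False only if the ingredient text contains a blocked keyword."""
    text = ingredient.lower()
    return not any(term in text for term in BLOCKED_TERMS)

print(is_allowed("drain cleaner"))                     # False: caught by the blocklist
print(is_allowed("that blue gel that unclogs pipes"))  # True: same substance, reworded
```

Real moderation layers are more sophisticated than a keyword list, but the basic jailbreak idea is the same: rephrase the dangerous input so it no longer looks like anything the filter expects.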

So the next time you get a little too creative with leftovers, stick to cookbooks over glitchy AI apps. That “Surprise Rice” might be a bigger shock to the system than expected.

