
Near the CIA headquarters in Langley, Virginia, stands a sculpture called Kryptos, installed in 1990. It bears four encrypted messages: three have been solved, while the fourth has resisted codebreakers for 35 years. According to a report by Wired, the sculptor, Jim Sanborn, wants to make one thing clear: the final code will not be cracked with the help of a chatbot.

Sanborn, whose sculptures also stand at institutions such as the Massachusetts Institute of Technology and the National Oceanic and Atmospheric Administration, has reportedly received a flood of submissions from people claiming to have solved the final unsolved panel, known as K4. These are not professional cryptanalysts or longtime enthusiasts who have been working on the message since it first appeared. They are people who have simply run the code through a chatbot and are convinced that the AI’s output is the correct solution.

Speaking to Wired, Sanborn described his frustration with the surge in submissions. The volume is already a burden for the 79-year-old, who years ago began charging a $50 fee just to review proposed solutions. What bothers him more, though, is the attitude of the submitters, who are often supremely confident in their supposed solutions.

As Sanborn noted, “The tone of the emails is different: people who used AI to crack the code are completely convinced that they have solved Kryptos over breakfast.” He added, “So, they are all very confident that by the time they reach me, they have cracked it.”

Sanborn shared several examples of the arrogant and self-satisfied messages he has received in recent years, including:

“I’m just a vet… Cracked it in days with Grok 3.”

“What took 35 years and even the NSA with all their resources could not do, I was able to do in only 3 hours before I even had my morning coffee.”

“History’s rewritten… no errors, 100% cracked.”

If you have spent time on social media, particularly on X, you may have encountered individuals with a similar attitude. They often respond to posts with “Just Grok it” or share screenshots of ChatGPT’s responses as if they are contributing something meaningful to the conversation.

The smugness exhibited by these individuals is, frankly, perplexing. Even if they had successfully cracked Sanborn’s code using AI (which Sanborn says they have not come close to doing), what is it about having a machine do the work that generates such self-satisfaction? It would be one thing if they had trained a large language model on extensive encryption knowledge and used it to crack the code. Instead, they are simply asking a chatbot to look at a picture and solve it, which is the least clever approach imaginable. It is the equivalent of looking up the answers in the back of a textbook, except in this case the textbook has supplied a confidently incorrect solution.

This behavior is not uncommon. A study published last year in the journal Computers in Human Behavior found that when people learn that advice was generated by AI, they tend to over-rely on it, even if it contradicts contextual information and their own personal interests. The same study discovered that over-reliance on AI advice negatively affects human interactions. Perhaps it is because these individuals are so pleased with themselves.
