In a recent investigative report, Forbes revealed that Social Links, a Russia-linked spyware company previously banned from Meta's platforms over alleged surveillance activities, has co-opted ChatGPT for spying on internet users. The practice, which involves collecting and analyzing social media data to gauge users' sentiments, adds yet another controversial dimension to ChatGPT's use cases.

As per Forbes, Social Links was founded by Russian entrepreneur Andrey Kulikov in 2017 and now has offices in the Netherlands and New York. Meta dubbed the company a spyware vendor in late 2022, banning 3,700 Facebook and Instagram accounts it allegedly used to repeatedly scrape the social sites. Social Links is an open-source intelligence (OSINT) software provider.

Presenting its unconventional use of ChatGPT at a security conference in Paris, Social Links showcased the chatbot's proficiency in text summarization and analysis. By feeding it data, obtained through its proprietary tool, about online discussions of a recent controversy in Spain, the company demonstrated how ChatGPT could quickly process the material and categorize sentiments as positive, negative, or neutral. The results were then presented on an interactive graph.

Privacy advocates, however, find this development deeply troubling. Beyond the immediate concerns raised by this specific case, there is a broader worry about the potential for AI to amplify the capabilities of the surveillance industry. Rory Mir, Associate Director of Community Organizing at the Electronic Frontier Foundation, expressed apprehension that AI could enable law enforcement to expand surveillance efforts, allowing smaller teams to monitor larger groups more efficiently.

Mir highlighted the existing practice of police agencies using fake profiles to infiltrate online communities, which has a chilling effect on online speech. With the integration of AI, Mir warned, tools like ChatGPT could facilitate quicker analysis of data collected during undercover operations, effectively enabling and escalating online surveillance.

A significant drawback noted by Mir is the track record of chatbots delivering inaccurate results. In high-stakes scenarios like law enforcement operations, relying on AI becomes precarious. Mir emphasized that when AI influences critical decisions such as job applications or police attention, biases inherent in the training data, often sourced from platforms like Reddit and 4chan, become not just factors to consider but reasons to reconsider the use of AI in such contexts.

The opaque nature of AI training data, referred to as the "black box," adds another layer of concern. Mir pointed out that biases from the underlying data, originating from platforms notorious for diverse and often extreme opinions, may manifest in the algorithm's outputs, making its responses potentially untrustworthy. The evolving landscape of AI applications in surveillance raises important questions about ethics, biases, and the potential impact on individual freedoms and privacy.

Update: Since this piece was published, Social Links reached out to Firstpost and stated that it is a privately held American company, that its intellectual property is based in the US, that its data centre is located in Helsinki, and that its services and entire infrastructure have been situated in Europe for more than three years.
A representative of Social Links also stated that the company is headquartered in the US, with offices in Miami and Amsterdam and an R&D centre in Latvia. Social Links denied having any presence, staff, operations, or business relations in the Russian Federation, and stated that, barring the origin of its founders, it has no association with Russia. The company also claimed that "Meta's strong accusation requires substantiation with evidence which, despite their claim, they have not provided." (With input from agencies)
A Russia-linked company with expertise in online surveillance, previously banned by Meta as an alleged spyware vendor, has reportedly turned OpenAI's ChatGPT into a tool for spying on internet users, using it for sentiment analysis of scraped social media data.
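The Forbes account describes only the outcome of the Paris demonstration, but the general workflow it outlines, feeding scraped posts to a chat model and asking for a sentiment label, is straightforward to illustrate. Below is a minimal sketch assuming the OpenAI Python client; the example posts, prompt wording, and model name are illustrative assumptions, not Social Links' actual tooling or prompts.

```python
# Illustrative sketch only: labels short posts as positive, negative, or
# neutral with the OpenAI chat API, then tallies the labels. The posts,
# prompt, and model name are assumptions for demonstration purposes.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

posts = [
    "This decision is a disgrace, absolutely shameful.",
    "Honestly relieved it turned out this way.",
    "Press conference is scheduled for 3pm tomorrow.",
]

def classify(post: str) -> str:
    """Ask the model for a one-word sentiment label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system",
             "content": "Label the sentiment of the user's text as exactly one "
                        "of: positive, negative, neutral. Reply with the label only."},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Aggregate labels into the kind of breakdown shown on a sentiment chart.
counts = Counter(classify(p) for p in posts)
print(counts)  # e.g. Counter({'negative': 1, 'positive': 1, 'neutral': 1})
```

At this scale the exercise is trivial; the concern raised by privacy advocates is that the same loop, pointed at large volumes of scraped data, lets a small team summarize and profile entire communities quickly.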