Generative AI: A New Frontier for the IoT


It’s debatable when the conversation around mimicking human thinking with machines began. What is certain is that for years to come, generative AI will be explored by teachers, doctors, law enforcement, scientists, gamers, and more to automate workflows. Last month, in ChatGPT – Take It or Leave It, we shed light on the newest hype in AI. As we discover more applications of generative AI in our lives, we are challenged to weigh the balance between AI used for good and AI used for harm.

For example, AI can serve as an automated grading solution for teachers, giving them more one-on-one time with students. However, the same applications can threaten the academic and creative integrity of writers and artists. Moreover, when deployed without proper guardrails and precautions, AI-generated cyber-attacks, content, and images can stoke anxiety and undermine netizens (people who seek to make the internet a better place) across the information domain.

The More We Know, the Less We Know 

Generative AI uses text prompts to generate data. As mentioned last month and in previous blogs, AI technology is being exploited by criminals to deliberately mislead users online, and it sometimes unwittingly accelerates the spread of misinformation. Deepfakes, a human image synthesis technique, require OSINT operators to meticulously weigh the authenticity of any image, audio, or video data they collect online. Social media AI algorithms feed users topics based on content they have visited previously, whether it’s harmful or misleading. This fundamental shift in technology has reshaped industries in unprecedented ways, as seen in AI-generated extremist video games built to spread propaganda to new audiences.

It is up to the national security community, in partnership with industry innovators, to set the pace of technology rather than ride the coattails of adversaries.

What’s the Latest? 

You may have heard that Microsoft, through its partnership with OpenAI, is building on the chatbot craze in its Edge browser to change what we know of online search. Microsoft is giving its chatbot a personality, allowing you to request that an enthusiastic, professional, or funny tone be applied to the response to your question.

The goal of this type of technology is to enable recommendations and conversations grounded in information cited from various online resources. It is intended to learn human preferences and, in effect, think like a person. Microsoft is expected to spread this capability throughout its technology suite, though we may not see it in the form of Clippy 2.0. Darn.

What’s Google doing? In one word: Bard. This month Google introduced Bard, its conversational AI service, to testers and promises to make it publicly available in the coming weeks. Google aims to set the bar high for the quality of generated data, and it has a history of investing in understanding human language with models like BERT and MUM.

As we turn to our search providers for deeper insights into our questions, they strive to respond with more “thoughtful” answers.

The Solution 

With the emergence of generative AI, untrained, unsophisticated bad actors can spread far more harm and propaganda online with significantly less effort and skill. As generative AI transforms the Internet of Things in extraordinary ways, the national security community must lead with game-changing technology.

Generative AI poses real challenges to the national security community today, requiring careful consideration of the authenticity of any data collected online. Ntrepid helps you understand the risks of operating online without a managed attribution platform and integrated data collection applications, along with lessons in OSINT techniques for operating safely in today’s demanding online environment.