AI’s potential dystopias: risks of specialised AI systems and more

We spoke to Henrik Plate, Security Researcher at Endor Labs, about DroidGPT, the future of AI, and much more.
Before we start, could you explain a little about Endor Labs and your role in the company?
Endor Labs gives DevSecOps teams the context they need to prioritise open source software (OSS) risk, secure CI/CD pipelines, and meet compliance objectives such as Software Bills of Materials (SBOMs). The company is dedicated to giving security leaders the visibility they need—and don’t otherwise have—into the software supply chain while using third-party and open-source software.
Here at Endor Labs, I help lead Station 9, the company’s research capability. Station 9 brings together software and security specialists–from different disciplines and around the world–to focus on critical but often neglected aspects of the software development lifecycle and software supply chain security.
There’s a lot of speculation about whether AI is going to advance to a stage where it is uncontrollable, is this a valid issue, or more of a dystopian storyline?
Such scenarios often assume the advent of artificial general intelligence (AGI), which is able to compete (and outperform) humans in a wide range of intellectual tasks, and which is able to (re)act using a variety of services connecting it with the virtual and physical world.
However, despite all the impressive advances of AI in areas like natural language processing (NLP), computer vision, or autonomous driving, we should not forget that those technologies are still very specialised in well-defined and relatively narrow tasks. They do not have the core knowledge or common sense that humans acquire starting from the earliest childhood, thanks to our sensory abilities, individual experiences or emotions.
As such, there is little risk of an autonomous or general AI “getting out of control” in the near future. But the repeated discussions around AGI distract from the fact that the extensive and broad adoption of highly specialised AI across different domains still leaves room for dystopias.
The risks of specialised AI systems taking decisions that are unethical, repeating, or amplifying biases, spreading disinformation, having unintended consequences or simply being intransparent and unexplainable are more significant and have been demonstrated on various occasions.
It is therefore important for developers of AI systems to increase the explainability and transparency of their algorithms, and for consumers to review and verify the decisions made by AI.
Additionally, it is clear that adversaries actively seek to use and misuse AI to their advantage – either by finding and exploiting a growing list of AI vulnerabilities in legitimate AI systems (e.g. through data poisoning or prompt injection attacks) or by creating and using their own AI systems, e.g. for spam campaigns or audio and image deep fakes (voice scam).
Can you explain a little about Endor Labs DroidGPT?
Endor Labs DroidGPT uses artificial intelligence to help software development and security professionals select better open-source software. It combines the power of ChatGPT with Endor Labs’ proprietary risk data to help users quickly and easily research open-source software packages through easy conversation.
Users can log in to the OSS Explorer option in the Endor Labs platform, ask simple questions like “What are the best logging packages for Java?” and find answers instantly. All results are overlaid with risk scores that encompass the quality, popularity, trustworthiness and security of each package.
Is AI the future, or is it simply the next step forward in technology until the next great invention is built?
AI is not a single monolithic technology, but a wealth of different techniques, algorithms, and approaches.
Those individual technologies go through the same, more or less accentuated hype cycle, and some of them have already reached the plateau of productivity, where they provide real value in specific areas such as recommendation systems or fraud detection in the financial industry.
Just recently, the release of ChatGPT and its application programming interface (API) pushed large language models (LLMs) to the public attention, but Endor Labs’ own experiments show some deficiencies when it comes to classifying code snippets as malicious or benign. The next months and years will show which use cases benefit from LLMs and allow for sustainable business models.
But considering all those individual technologies under the big umbrella of AI, they have the potential to fundamentally transform many areas of human life, comparable to the development of combustion engines or the miniaturisation of computer chips, both of which are also not single inventions but comprise numerous iterations and improvements over many decades.
What are the next steps for AI?
Considering the significant increases of both scientific papers published by academia as well as investments made by established industry players and start-ups alike, we will certainly see further accelerated development of AI technologies.
One interesting question is if open-source initiatives will be able to compete with proprietary efforts. Big tech has unique access to significant amounts of data and compute resources, both of which are essential for training neural networks with billions of parameters.
Still, as for many other technologies in the past, open-source software set out to become a serious contender of commercial service providers. Viable open-source solutions exist for a significant share of the AI technology stack – from data sets and foundation models to programming frameworks and vector databases. Considering how quickly the performance of open source models seems to approach GPT-4 – currently, OpenAI’s most performant variant of generative pre-trained transformers (GPT) – some go as far as to say that commercial providers will not be able to compete with the collective efforts of the open source community. Recent research, however, also indicates that performance improvements achieved by open-source contenders are not substantial, and thus cannot compete with the broader capabilities of closed language models.
The strong interest of the open-source community in AI is also visible when looking at open packages published on PyPI and npm: As part of Endor Labs’ continuous efforts to catch malicious package uploads, we also started tracking calls to the OpenAI API. Following the release of ChatGPT’s programming API at the end of January 2023, more than 636 new PyPI and npm packages using this API have been created (203 in June 2023 alone). On top of those come another 276 that existed before and added ChatGPT support.
Another big change will arise from increased regulatory efforts such as the EU AI Act, which reflects the growing concerns of politicians regarding fair and explainable AI. This may have an impact on AI technology and service providers comparable to GDPR’s impact regarding the handling of personally identifiable information (PII), which resulted in significant investments of software and service providers to comply with GDPR requirements. In fact, GDPR already contains provisions that also affect AI systems, e.g. that ‘algorithmic systems should not be allowed to make significant decisions that affect legal rights without any human supervision.’
