Open Source: Invisible backbone of AI startups
AI rules should be proportionate and risk-based, especially for startups building and deploying AI that benefits EU consumers and businesses. Including general purpose AI in the AI Act risks throwing the risk-based approach overboard and casting blanket suspicion on AI startups, which are critical to a competitive AI landscape.
One of the hardest things about building an AI startup is finding, retaining and growing expertise specific to the company's AI product or service. These experts are in high demand, and startups generally have fewer resources than other companies. To nevertheless build services that bring innovation to businesses and consumers and disrupt large digital and analog markets, entrepreneurs must make smart decisions with their limited resources.
AI startup entrepreneurs tap into the platform and open source software ecosystem (software whose original source code is freely available). This has proven valuable both for startups and for the ecosystem at large, as it significantly reduces the cost of entering the market. For general purpose AI, defined broadly as AI that can be trained to perform different tasks in different situations, the open source ecosystem is particularly important. AI entrepreneurs do not have to reinvent the wheel; they can build on freely available algorithms and focus on innovating cutting-edge products and services. They can compete in the market of their choice without first having to build backbone infrastructure that would most likely be less efficient than an established best practice.
If new EU AI rules include general purpose AI in their scope, this would deviate from the risk-based approach that guides the rest of the Act and undermine the open source ecosystem that powers AI startups. General purpose AI is not, per se, low or high risk. It is unclear why a technology-neutral approach that is proportionate (i.e. risk-based) should be abandoned in the AI Act. Getting this wrong could have a chilling effect on the open source community that drives AI development in the EU.
Including general purpose AI in the scope of the AI Act also neglects the fact that entrepreneurs often end up building startups that operate in sectors, and offer services, they did not originally anticipate. Ergo, what may in future be classified as ‘high risk general-purpose AI’ might also have plenty of low-risk use cases, yet remain stuck behind the high-risk compliance barrier.
As the EU Commission’s initial AI regulation laid out, regulating a nascent high-growth sector like AI is challenging, which makes it all the more important not to accidentally drive innovation away. Broadly expanding the scope of the AI Act to include general purpose AI risks sending a chilling message to the ecosystem and casts blanket suspicion over the open source AI community. In effect, including general purpose AI in the scope directly targets startups, which do not have big AI teams and rely on what is freely available online. We encourage reverting to the Commission’s initial proposal and sticking to the risk-based and proportionate approach in the coming negotiations.