Is it time to introduce ethics into the agile startup model?

The startup’s rocket ship path is well known: get an idea, build a team and put together a minimum viable product (MVP) that you can get in front of your users.

However, today’s startups need to reconsider the MVP model because artificial intelligence (AI) and machine learning (ML) are becoming ubiquitous in tech products, and the market is increasingly aware of the ethical implications of using AI to augment or replace humans in decision-making.

An MVP allows you to gather critical feedback from your target market that then informs you of the minimum development required to launch a product – creating a powerful feedback loop that drives today’s customer-led business. This nimble, agile model has been very successful over the past two decades – launching thousands of successful startups, some of which have grown into multi-billion dollar companies.

However, building high-performance products and solutions that work for the majority is no longer enough. From facial recognition technology that is biased against people of color to credit-lending algorithms that discriminate against women, the past several years have seen many products powered by AI or ML pulled from the market because of ethical dilemmas that emerged only after millions of dollars had been poured into their development and marketing. In a world where you may get only one chance to bring an idea to market, this risk can be fatal, even for well-established companies.

Startups do not have to abandon the lean business model in favor of a more risk-averse alternative. There is a middle ground that can bring ethics into the startup mindset without sacrificing the agility of the lean model, and it starts with a startup’s original goal – getting a proof of concept in front of potential customers.

However, instead of developing an MVP, companies should develop and roll out an Ethically Viable Product (EVP) based on Responsible Artificial Intelligence (RAI), an approach that takes into account the ethical, moral, legal, cultural, sustainability, and social and economic considerations that arise when developing, deploying and using AI/ML systems.

And while this is good practice for startups, it should also be standard practice for big tech companies that build AI/ML products.

Here are three steps startups—particularly those that integrate AI/ML technologies into their products—can use to develop an EVP.

Hire a chief ethics officer to lead the mission

Startups have chief strategy officers, chief investment officers – even chief entertainment officers. A chief ethics officer is just as important, if not more so. This person can work across various stakeholders to ensure that the startup develops a product that meets the ethical standards set by the company, the market and the public.

They should act as a liaison between the founders, the executive team, the investors and the board of directors on one side and the development team on the other – making sure everyone is asking the right ethical questions in a thoughtful way and mitigating risk before it is built into the product.

Machine learning models are trained on historical data. If there is a systemic bias in an existing business process (such as unequal lending practices based on race or gender), the model will pick up on it and conclude that this is how it should continue to behave. If it later turns out that your product does not meet the ethical standards of the marketplace, you cannot simply delete the data and find new data.

The algorithm has already been trained. You can no more erase that influence than a 40-year-old can undo the influence of his parents or older siblings on his upbringing. For better or worse, you are stuck with the results. The chief ethics officer needs to sniff out this bias wherever it is ingrained in the organization before it becomes ingrained in AI-powered products.
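To make this concrete, here is a minimal sketch, in Python, of the kind of pre-training audit a chief ethics officer might commission: checking whether historical outcomes already skew along a protected attribute before any model learns from them. The column names, toy data and four-fifths threshold are illustrative assumptions, not a prescription from this article.

```python
import pandas as pd

# Hypothetical historical lending records; column names are illustrative.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group in the data a model would be trained on.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest group rate divided by highest.
# The 0.8 ("four-fifths") cutoff is a common rule of thumb, used here
# as an illustrative gate rather than a legal standard.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: disparate impact ratio is {ratio:.2f}. "
          "Historical outcomes are skewed, and a model trained on this "
          "data will likely reproduce the bias.")
```

Running a gate like this before training makes the bias visible while the fix is still cheap – rebalancing or re-collecting data rather than retiring a shipped product.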

Incorporating ethics into the entire development process

Responsible AI is not a point-in-time exercise. It is a comprehensive governance framework focused on the risks and controls across an organization’s AI journey. This means that ethics must be integrated into every stage of the development process – from strategy and planning through development, deployment and operations.

While defining the scope, the development team should work with the chief ethics officer to understand the general ethical principles of AI that hold across many cultural and geographic contexts. These principles describe, suggest or inspire how AI solutions should behave when faced with ethical decisions or dilemmas in a given field of use.

Above all, a risk and harm assessment should be undertaken to identify any risk to a person’s physical, emotional or financial well-being. The evaluation should also look at sustainability and assess the harm an AI solution might do to the environment.

During the development phase, the team must constantly ask how their use of AI aligns with the company’s values, whether the models treat different people fairly and whether they respect people’s right to privacy. They must also consider whether their AI technology is safe, secure, and robust and how effective the operating model is in ensuring accountability and quality.
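To turn these fairness questions into a repeatable check rather than a one-off discussion, teams can add a simple metric to their evaluation pipeline. Below is a minimal Python sketch computing the demographic parity gap – the largest difference in positive-prediction rates between groups – on a held-out set; the function name, toy data and 0.2 threshold are illustrative assumptions.

```python
from typing import Sequence

def demographic_parity_gap(preds: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rate between any
    two groups; 0.0 means every group is treated identically."""
    counts: dict[str, tuple[int, int]] = {}
    for pred, group in zip(preds, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + pred, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Illustrative usage on a held-out evaluation set.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> 0.50
if gap > 0.2:
    print("Fairness check failed: investigate before shipping.")
```

Wiring a check like this into continuous integration means every model iteration answers the fairness question automatically, not just the iteration that happens to be reviewed.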

Training data is a critical component of any machine learning model. Startups should be concerned not only with the MVP and proving out the model initially, but also with the model’s eventual context and geographic scope. This will allow the team to select a representative dataset and avoid data bias issues down the road.
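As a lightweight sketch of that selection step, the team can compare the demographic mix of the training set against the mix of the market the product will eventually serve. The distributions and the 0.1 threshold below are illustrative assumptions, written in Python for consistency with the earlier sketches.

```python
# Hypothetical age mix of the training data versus the target market.
training_mix = {"18-30": 0.55, "31-50": 0.35, "51+": 0.10}
target_mix   = {"18-30": 0.30, "31-50": 0.40, "51+": 0.30}

# Total variation distance between the two distributions:
# 0.0 means identical, 1.0 means completely disjoint.
tvd = 0.5 * sum(abs(training_mix[k] - target_mix[k]) for k in target_mix)
print(f"Representation gap (TVD): {tvd:.2f}")

if tvd > 0.1:
    print("Training data under-represents part of the target market; "
          "rebalance or collect more data before scaling beyond the MVP.")
```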

Don’t forget ongoing AI governance and regulatory compliance

Given the implications for society, it is only a matter of time before the European Union, the United States or another legislative body passes consumer protection laws governing the use of AI/ML. Once such laws pass, those protections will likely spread to other regions and markets around the world.

It’s happened before: The passage of the European Union’s General Data Protection Regulation (GDPR) set off a flurry of other consumer protection measures around the world that require companies to obtain and document consent before collecting personal information. Now, people from across the political and business spectrum are demanding ethical guidelines around AI. Once again, the European Union is leading the way, having put forward its 2021 proposal for a legal framework governing artificial intelligence.

Startups deploying AI/ML-powered products or services must be prepared to demonstrate ongoing governance and regulatory compliance – building these processes now, before regulations are imposed on them later. A quick survey of proposed legislation, guidance documents and other relevant guidelines before building a product is a necessary step of an EVP.

In addition, it is advisable to re-examine the regulatory and political landscape prior to launch. Having someone on your board of directors or advisory board who is actively involved in the global deliberations now underway would also help you anticipate what is coming. Regulations are on the way, and it’s good to be prepared.

There is no doubt that AI/ML will provide enormous benefits to humanity. The ability to automate manual tasks, streamline business processes and improve customer experiences is too great to pass up. But startups need to be aware of the effects AI/ML has on their customers, the market and society at large.

Startups usually get only one shot at success, and it would be a shame for a high-performing product to be wiped out because ethical concerns were not uncovered until after it hit the market. Startups need to incorporate ethics into the development process from the start, develop an EVP based on RAI and continue to ensure AI governance after launch.

Artificial intelligence is the future of business, but we cannot lose sight of the need for empathy and the human element in innovation.
