AI was definitely the main theme at this year’s Microsoft Future Decoded event in London’s Docklands…

…with plenty of discussion around how organisations should approach the application of AI, but also a clear message that the AI explosion brings ethical challenges. The key message here is “Just because you can doesn’t mean you should”: organisations need to think about the impact of AI in terms of safety, privacy, fairness, transparency, and accountability.

A report launched at the event, Accelerating Competitive Advantage with AI, shows that over half (56%) of UK organisations are doing something with AI, but most of these are experimenting at a small scale: only 8% of business leaders consider the work they are doing with AI as “advanced”.

Discovery rather than strategy

At the moment, there’s a lot of discovery going on and many proof-of-concept projects, yet little strategy and scaling. Only 24% of business leaders say their organisation has a defined AI strategy. The report also quantifies the business value of AI: organisations using AI have an 11.5% performance advantage.

All areas of AI (Analytics and big data, automation, RPA, machine learning, voice recognition, AI-enhancement, smart digital assistants, and AI research) are showing growth between 2018 and 2019. While analytics and big data take the biggest slice of the action, the largest growth area is in machine learning—perhaps because this is the most practical area in which organisations can get quick traction without assigning a big budget.

AI is growing up fast, but regulations and guidelines are lagging, and new technology brings new risks. For example, the tech can amplify biases in data sets. Organisations that are pushing forward with AI must consider the risks and responsibilities.


“No industry is immune from the responsibilities around AI…” (Mitra Azizirad, Corporate Vice President, Microsoft AI)

Conversely, there is a risk attached to doing nothing. Focusing on the fear and risks can paralyse an organisation. The laggards may wait until the innovators have made the mistakes, learned the lessons, and proven the business case beyond a shadow of a doubt. But meanwhile, the organisations that are experimenting with AI right now will be first to discover the exponential efficiencies that will help them jump ahead and build an increasing lead.

These are the organisations with a culture of experimentation, where there is a greater appetite for risk and less fear attached to failure. The 8% of early adopters were quick to imagine the use cases and see the future value. Essentially, having edged ahead, the early adopters will enjoy what Warren Buffett calls an “economic moat”—the ability to sustain a significant advantage over those organisations which are late to the party and may struggle to catch up.

Ethical responsibility

To ensure the risks are considered and mitigated, it is necessary to infuse ethical responsibility into each stage of the AI development cycle: design, development, training, and operations/monitoring. Because AI technologies are inherently non-deterministic and black-box-like (the behaviours of an AI change continually over time), close monitoring is essential, and contingency plans should be prepared as part of the design process. Governance is critical; otherwise organisations may find themselves as headline news—particularly where AIs are public facing.

Human Intelligence

AI may have been the headline topic, but people are still the key to progress and transformation. AI has reached human parity in some areas—performing speech recognition, object recognition, text-to-speech, and others as well as people can. But the scope of what computers can do is still overshadowed by human capabilities—especially in terms of creativity, complex decision-making, and fine motor skills.

Business transformation (whether that’s AI-driven, digital, or otherwise) is still very much a human endeavor, so there are associated human challenges. And despite the complexities of AI, big data, and digital transformation, the human challenges still outweigh the technical challenges. People are more complex than machines.

Firstly, organisations are still struggling to find the right people with the right skills. According to Merkle Aquilla (who hosted a session on the intersection between data science and AI), half of their data scientists enter the profession from the traditional mathematics/statistics route. The other half come from areas as diverse as linguistics, economics, humanities, and physical sciences.


Changing people as well as technology

Secondly, business leaders face significant organisational change challenges when driving innovation and transformation. You can change technology overnight—tech doesn’t fear change—but people require some persuasion and reassurance. Engagement is the key to business transformation—communicating early in the process and taking your staff with you on the journey—but many business leaders are struggling with the leadership of AI-driven transformation.

As there’s no established playbook to work from, the organisational challenges are amplified. Focusing on engagement as a central pillar of organisational change remains a critical success factor for AI-driven transformation. Yet, the Microsoft report found that 96% of employees have never been consulted by their boss about the introduction of AI. And 83% of leaders say employees have never asked about using AI.

Strategy and engagement

Last year, Microsoft’s key message around AI was “get started”. This year, the message is “get serious”. Organisations need to start with strategy, engage with employees and customers, consider the ethical risks, and plan out how they can apply AI at scale to deliver on the opportunities that are on the table right now.

By Martin Stewart.

https://www.axiossystems.com
