
The Importance of Balancing Risk and Innovation with Generative AI

Too many companies are focusing on one, not both. Here are 3 lessons we’ve learned through our early experimentation.

March 16, 2023

Generative AI products like ChatGPT have introduced a new era of competition to almost every industry. As business leaders seek to quickly adopt ChatGPT and other products like it, they are shuffling through dozens, if not hundreds, of use cases being proposed.

What are we seeing? Companies typically fall into one of two camps. The first camp: Test generative AI far and wide. The second: Allow minimal use by a small group of people, if at all. (Some companies, like JP Morgan Chase, have banned it outright.)

My take: The companies that strike the right balance of risk and innovation when adopting generative AI will win. The question is, how do you find the right balance for your business?

At West Monroe, we’re asking that very question. We are actively piloting different uses for generative AI via a strategy of learning fast—and failing fast—that will hopefully lead to a unified, company-wide strategy for adoption. At the same time, we’re heavily in tune with the associated risks, leaning on our legal and risk teams to understand the implications of these pilots.

I’m very familiar with how West Monroe is navigating this process, not only as the leader of our Technology practice but as the head of our generative AI task force. We’ve pulled together a group of individuals from across the firm (call it a committee, a multidisciplinary team, or a task force) with representation from many areas of the company, made up of people who are actively interested in advancing the technology.

There is plenty of pressure from different departments to develop and test their use cases for generative AI as quickly as possible; this team helps prioritize those use cases while keeping our company’s goals and strategy in mind. At the same time, such excitement must be met with an appropriate amount of caution.

How do you successfully balance risk and innovation with generative AI? Here are the lessons we’ve learned so far from our approach.

1. Don’t wait to start experimenting with generative AI

According to IBM’s 2022 Global AI Adoption Index, 35% of companies are using AI in their business, and 42% are exploring AI. This equates to more than three-quarters of the surveyed companies – and this research was done six months before ChatGPT was released. More recently, 63% of companies plan to increase or maintain AI and ML spending in 2023.

The sooner a company starts developing a framework for adopting generative AI, the sooner use cases can be rolled out and start showing ROI. Employees are excited about the potential impact this will have on productivity and efficiency.

As we saw at West Monroe, however, issues can arise when this excitement leads to a scenario where employees in various departments are using generative AI tools with no coordination and little-to-no oversight. Not only is this risky (siloed employees may not be considering the risk and liability being introduced to the company), it is also inefficient, since there are bound to be redundancies.

And there is no easy way to restrict who can and cannot use generative AI in the workplace: these aren’t single enterprise products, and anyone can sign up for the tools as they see fit. The only way to truly press pause would be to block access to websites like ChatGPT, and we don’t want to be that restrictive.

Our biggest initial to-do was getting a handle on how employees were using generative AI, determining which uses are “acceptable” and which are “too risky,” and finding a balance between efficient adoption and proper vetting.

Our task force acts as the collection point for all of our uses of generative AI, making sure the lessons learned from them are routed to one location. This has led to more informed decision-making and a clearer picture of which tools our teams are using and which ones are adding the most value.

2. Assign a multidisciplinary team to prioritize and communicate

Research by West Monroe shows that high-performing, digitally enabled organizations are already moving away from hierarchical organizational structures. Instead, their structures are flexible and adaptable to enable collaboration. This encourages and enables individuals to work and learn across different job functions more easily, and we have really leaned into this as we create a framework that will be applied across the entire company.

Determining your path toward generative AI adoption should not live with just one department, whether that’s IT, risk & compliance, or innovation. Instead, develop a multidisciplinary team that gives every department a seat at the table to ensure that potential use cases are viewed from all angles.

Does financial automation require insight from the IT department? How do changes in marketing processes impact business development? By developing a 360° view of the pros and cons for each possible use, we are able to make (what we hope are) the right decisions in a timely manner.

Here is a sample of the use cases departments have proposed, alongside the risks we’re weighing for each:

Marketing
Use case proposed: Develop copy for social media and lead generation content, freeing up time for marketers to focus on more strategic tasks.
Risk to consider: Are we obligated to tell our audience a social post was generated using AI? If yes, how does this impact West Monroe’s reputation?

Business Development
Use case proposed: Enable quicker, more efficient research on new business prospects, which could then be incorporated into pitch decks and presentation materials.
Risk to consider: Companies have no ownership of the output from ChatGPT. Therefore, we must be careful when claiming its content as our own, particularly because competitors may be using it too. Any of our templates with fine print about proprietary information may need to be updated.

Finance & Accounting
Use case proposed: Automate financial analysis and reporting and identify patterns and anomalies in financial data. This can reduce errors and save time on manual tasks.
Risk to consider: We need to be cautious about what sensitive data is being fed into publicly available AI software. This includes who owns this data once it is input, and whether it could potentially be accessed by a competitor or bad actor. (One simple pre-submission guardrail is sketched below.)

Human Resources
Use case proposed: Rethink the approach to tasks like resume screening and candidate matching. This could increase HR efficiency and also reduce bias.
Risk to consider: While incorporating AI may reduce HR biases, we need to keep in mind that the data used to develop the AI may contain some underlying biases of its own.
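To make that data-handling risk more concrete, here is a minimal sketch of the kind of pre-submission check a team could run before pasting text into a public generative AI tool. It is purely illustrative, not a West Monroe tool: the scrub_sensitive_text helper and the handful of patterns are hypothetical, and a real deployment would lean on a vetted data loss prevention product and on policy, not a few regexes.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted
# data loss prevention (DLP) tool rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_or_card_number": re.compile(r"\b\d{12,19}\b"),
}

def scrub_sensitive_text(text: str) -> tuple[str, list[str]]:
    """Redact obvious sensitive values and report which pattern types were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    draft_prompt = (
        "Summarize the Q3 variances. Questions to jane.doe@example.com; "
        "account 4111111111111111 shows an unexplained spike."
    )
    cleaned, findings = scrub_sensitive_text(draft_prompt)
    if findings:
        print("Flagged before submission:", findings)
    print(cleaned)
```

Even a simple gate like this gives a task force something tangible to review: which categories of data people tried to submit, and how often.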

You may be asking: How do we address the naysayers who see this as just another layer of corporate bureaucracy slowing things down? We’ve tried to proactively avoid these criticisms by adopting a hub-and-spoke model for disseminating updates.

Rather than having everything funneled through just one person or even a few people at the top, we have roughly a dozen stakeholders joining a weekly call to discuss our progress and then reporting back to the areas of the company they are responsible for. This helps to ensure that our entire organization is moving in the same direction at the same time.

3. Treat generative AI with a product mindset

Similar to many product development initiatives, our approach to implementing generative AI starts with use cases and proofs of concept. West Monroe teams have been asked to identify their most essential uses for this technology, including how it increases efficiency, how it impacts customer experience, and where the potential pitfalls are. Then, our task force chooses which use cases to greenlight on a trial basis.

Once a use case has been given tentative approval, we develop workstreams to properly oversee each implementation and collect useful data and qualitative feedback. This testing and learning is critical to make informed decisions regarding what to greenlight next.

In the long term, as these proofs of concept begin to show results, we can turn the successes into standard operating procedures and apply them more efficiently across more projects.

Showcasing ROI can be difficult when the best-case scenario is “nothing bad happened.” We view it as a good sign if, as we start incorporating generative AI into day-to-day processes, we do so without compromising sensitive data or receiving pushback from key stakeholders.

But “nothing bad happened” is not especially helpful long term, so we also focus on the ROI that each successful use case provides: how client satisfaction and engagement have been impacted, where efficiencies have been realized, how costs have been reduced, and so on.
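As a rough illustration of the efficiency side of that math, the figures and the estimate_efficiency_roi helper below are hypothetical, not West Monroe data; the sketch simply frames a use case’s monthly net benefit as hours saved times a loaded hourly cost, minus tool and oversight costs.

```python
def estimate_efficiency_roi(
    hours_saved_per_month: float,
    loaded_hourly_cost: float,
    monthly_tool_cost: float,
    monthly_oversight_cost: float,
) -> dict:
    """Back-of-the-envelope monthly ROI for a single generative AI use case."""
    gross_benefit = hours_saved_per_month * loaded_hourly_cost
    total_cost = monthly_tool_cost + monthly_oversight_cost
    net_benefit = gross_benefit - total_cost
    roi_pct = (net_benefit / total_cost) * 100 if total_cost else float("inf")
    return {
        "gross_benefit": gross_benefit,
        "total_cost": total_cost,
        "net_benefit": net_benefit,
        "roi_pct": round(roi_pct, 1),
    }

# Hypothetical example: 40 hours saved at a $150 loaded rate,
# against $500 in licenses and $1,000 of review/oversight time.
print(estimate_efficiency_roi(40, 150, 500, 1_000))
# -> {'gross_benefit': 6000, 'total_cost': 1500, 'net_benefit': 4500, 'roi_pct': 300.0}
```

Softer measures like client satisfaction still need their own qualitative tracking; this only covers the part of the case that reduces cleanly to arithmetic.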

Conclusion: Find your own balance between risk and agility

Generative AI is just the beginning – we’re in an era where opportunities will continue to emerge for companies to embrace new, cutting-edge technology in ways that will revolutionize their work. And the speed at which any company chooses to adopt the newest tools will depend on its appetite for risk versus its desire for agility. Being first to innovate and first to develop a more efficient way of doing business is great, but is it worth the reputational risk if something goes awry?

This is a question we had to ask ourselves. And it honestly is a very heavy question to consider. What are we willing to risk in the name of being innovative? While we could wait for this public beta testing period (of sorts) to end and learn from others’ mistakes, taking a backseat approach has never been our way of doing business.

Instead, we are trying to embrace the fact that we aren’t going to get it right every time. There are likely going to be a few missteps or misplaced bets in the coming year – and that will be okay. We can only hope that the lessons we’ve learned so far will lead to strong, efficient processes moving forward.