30 January 2023

BeyondRM looks five years into the future

Is a five-year horizon too far to be looking into the future? As Risk Managers, our focus is too frequently “short term” (today, this year, maybe next) and too often “internal”. BeyondRM, as part of our series of conversations, decided that we should look beyond and, as so often happens at the start of a new year, make some predictions for five years from today.

With eight participants in the conversation, there was a wide range of predictions, yet also some common themes. Early in the conversation, it was suggested that one way to consider the future was to envision the world in 2027/28 and consider the drivers that had created that world. This provided a useful frame for each of the predictions that followed.

Some areas of prediction were quite macro in nature, while others were specific to the Risk Management profession. 

Broadly, the predictions covered: 

  1.  AI (present and future) 
  2. Flexibility and Agility of RM 
  3. US Economic Empire eclipsed 
  4. Africa
  5. Health 
  6. Reality/Virtual boundary 
  7. Inter-state Conflicts/Wars/Civil wars 
  8. Climate impacts 

A huge range was canvassed in our ninety-minute session. A consistent theme through each of the prediction areas was the question of what weak signals we should be watching, and why we keep missing them (collectively and individually).

  1.  AI. While definitely viewed as a major risk and opportunity five years from now, it was also pointed out as a current issue. How will capabilities such as ChatGPT and other AI evolve, and how will that impact our ability to identify and manage risks, including the risk (and opportunities) associated with implementing AI capabilities across business activities? Outside the office, will AI reinforce our information echo chambers, or will it enable more diverse thinking and acceptance of alternative views on some issues? 
  2. Flexibility and Agility of RM. Effective and “successful” Risk Management functions will be significantly more flexible, agile, and holistic, exploiting technology and culture to improve decision-making. ERM 3.0 was mentioned, but not defined, other than to require a more holistic approach, with ESG, the “extended enterprise”, and “ecosystems” playing a key part in the evolution of effective risk management.
  3. US Economic Empire. This subject garnered quite a bit of discussion. The global axis appears to be realigning, with three major spheres: the US, Europe, and the BRICS (Brazil, Russia, India, China, and South Africa). The desire of Iran, Russia, and China to find an alternative currency for oil and other major commodity transactions may contribute to a threat to American economic hegemony. The use of sanctions was cited as a driver for countries to seek mechanisms to bypass American-controlled financial systems and currency.
  4. Africa. The growing strength and influence of Africa, particularly in combination with a supportive China and the continent's wealth of natural resources, may present an opportunity in the coming years. Being first will certainly play a large role in both strategic and tactical roadmaps. Though placed under this category, the emerging risk of political and strategic alliances will be something to follow closely. As with the EU, there is potential for other regions to join forces to avoid being squeezed out, and global companies will need to tread carefully if they are present in each territory.
  5. Health. One prediction was that in five years' time, we may see medical and health technologies that extend lifespans significantly. While this may or may not happen, a counterargument was presented that in five years' time, global life expectancies may actually fall, due to climate, food security, and further pandemics. Global poverty may be exacerbated by political instability and climate. While there may be great progress, there was caution that the benefits may only be available to the very wealthy. 
  6. Reality/Virtual boundary. What information do we trust, and how can we separate “truth” from “falsehood” in information and data? What is “real”? While in part related to AI, there was more concern about the sanctity of information. What is real, what is virtual, what is “deep fake”? Will our workers and co-workers be “real”, and will our customers interact with real humans? What happens when the virtual customer “talks” to the virtual customer services rep? Is this really a five-year issue, or will we be dealing with this in 2023? 
  7. Inter-state Conflicts/Wars/Civil wars. The “War to end all wars” ended 104 years ago, and there has been no let-up in the number of conflicts since. This past year reminded us that war remains possible, indeed probable, and that it is our disbelief that leaders will resort to violence that stops us from projecting wars or conflicts, even when the evidence of impending conflict is clear to see. With this in mind, the group discussed what other wars may be possible, even probable, in the coming five years. No overt predictions were made, but potential conflicts in the Gulf region, Pakistan/India, and, of course, China/Taiwan were discussed. One interesting observation was that the Second American Civil War may already have started.
  8. Climate impacts. While we can see climate change impacts around us already, the group was concerned that the impact will increase, with ramifications across all industries, but with a particular impact on insurance and primary production industry groups. Progress on renewables is impressive, but it will still be many years before fossil fuels are meaningfully displaced, and with them, the ongoing damage to the environment.  

 All our conversations are held under the Chatham House Rule, and as such, we are happy to share our thoughts openly and frankly.

LinkedIn article

09 January 2023

Wasted and Useful Lives, or why I fear AI

AI scares me. Not because it will result in “Skynet”, though that is a distinct possibility. My real fear is the dumbing down of humanity. Sure, parts of humanity are already there, but cultures that value education and knowledge are the ones we perceive as being full of naturally smart people. We also know that as female literacy and education levels increase, birth rates decrease, reducing the strain on limited natural resources.

In 1895, H.G. Wells showed us a future hundreds of thousands of years from now. Little did he know that his envisioned future may be mere years or decades away. Will we end up with Eloi and Morlocks? What I fear from AI is exactly that dumbing down: coming generations of sheep, grazing on the green, green grass of an AI-managed farm.

Societies, like militaries, need a good “NCO Corps”: the junior-to-middle managers who are becoming “experts” and who will, in the fullness of time, provide and contribute to effective decision-making. How do we create and nurture those experts? By having cohorts of junior and mid-level professionals (Morlocks?) engaged in constant learning.

I remember sitting at lunch with two senior partners of an accounting firm. They were lamenting how “the kids of today”, referring to the young accountants in the firm, just didn't have the same work ethic or capabilities as their generation. I asked them how many of their generation were still with the firm, or even still in the profession. After a moment of silence, I said to them, “Your generation was no different from this one. Only the two of you, and maybe a handful of others from your early years, are still with the firm, or even still in the accounting industry.” 

My point was that an intake of fifty young accountants (or risk managers, contact centre professionals, human resource specialists, or any other profession) will become thirty mid-level professionals, and eventually will narrow down to five or so real experts. This is natural. Not everyone becomes an expert, and not everyone stays in the same profession, even more so today. 

So how does this relate to AI, and why does it scare me?

Simply put, as AI displaces jobs, it will not be displacing (initially) the experts. It will be displacing the junior-level workers, the very ones who need to go through years of learning through doing. Remove, or dramatically shrink, the input side of the path to professional excellence, and there will be fewer potential future experts.  

Now push AI even further back in the “intellectual supply chain”, and not only will there be fewer jobs to create future experts, but there will also be fewer individuals capable of even entering the supply chain. For example, AI today will write students’ essays for them. Yet the purpose of an essay or report is not the document itself, but the thinking processes it develops, enabling students to question and to formulate coherent thoughts.

So, with essays and term papers being generated by an AI system, used by a student who knows how to use Google but cannot rationally assess the veracity of someone's “own research” on YouTube, I envision a world in which education systems produce people who cannot think, but whose papers look and sound well thought out.

And then, will we require AI engines to identify the AI-generated term papers and assignments turned in by internet-proficient (but otherwise unskilled and uneducated) people?

Some will point out that when the lightbulb was invented, hunting whales for lamp oil was a profession that disappeared. Automobiles displaced horses. Our mobile phones have displaced complete families of appliances. Automation has put manual factory labour out of work, and the move away from coal-fired power has (almost) destroyed the coal mining sector.

And in each of those cases, people and societies have adapted. And society will adapt to the ubiquity of AI. New skills were required, and new professions opened up; many starved for skilled labour until enough people took the plunge and learned new trades.  

That process has continued throughout the past couple of centuries, and we take it as the norm for social and economic development. 

Today, AI is core to enabling the delivery of billions of packages, all scheduled to arrive at particular times across millions of locations; at that scale, it simply could not be done without AI. AI is finding new ways to identify cancer and other medical conditions, faster and with fewer errors. AI is improving our lives; there is no doubt about that. This revolution is both performing jobs that could not otherwise be accomplished and displacing jobs.

But what happens when the “new jobs” are filled by machines (not robots, though those will certainly have a continuing impact) and there are no “new jobs” for people to retrain and reskill into? What happens when the professions that require a refresh at the expert level cannot find the experienced middle and junior people who will grow into those roles?

I suspect that we will need to move confirmation of students' ability from the essay to the exam. And yet, there certainly will be, and already is, objection to the use of exams to measure student advancement. 

Or should we embrace a future of leisure, relying on a “universal income” that will be plenty for anyone other than the strivers and the wannabe billionaires? The good news is that there will always be those people who will do anything for the trappings that show their superiority (real or imagined).

There will always be artists, poets, authors, painters, and those who like to walk in the fields and woods. Freeing people through automation was a dream, and freeing them through AI is a continuation of that dream. We also need to value the people who will want to sit and talk to each other, and those who will not.

In H.G. Wells's 1895 novel The Time Machine, two races remain of the human race: the Eloi and the Morlocks. The Eloi are the poets, the writers and artists, those walking through the forests and sitting by the sea. The Morlocks are the residual professionals and individuals striving for a “better life”. Yet in his novel, the Morlocks are the “evil”, and the Eloi are the “good”. Or are they?

Will AI further a dumbed-down race of humans, unable to make the choice of which life they want? Or will AI truly free humanity from toil and let us choose for ourselves? Remove the requirement to work simply to live, and we can each choose a “wasted” or a “useful” life. But the real question will remain: which is which?

Thank you, Kliban, for the fine cartoon. I notice he does not say which is which. You choose.

NB: I use Grammarly, not quite AI, to radically improve my otherwise horrendous spelling and sometimes grammar. I'm not above a bit of automated assistance.