THE ACCOUNT
The latest in finance and business
Cop out?
Leaders of some of the world’s largest banks and investment funds have said they will not be attending Cop29 this month, according to the Financial Times. The climate change summit, which this year is being held in Baku, Azerbaijan, takes place a week after the US election, and there are inevitably fears among some that a Donald Trump victory might lead to big financiers deprioritising sustainability. Trump has threatened to withdraw the US from the Paris Agreement if he wins.
Most of those not attending, such as leaders of BlackRock, Bank of America and Deutsche Bank, insist they will be represented, albeit not by their most senior leaders. There have also been criticisms that last year’s Cop28 in Dubai was too big and a degree of downsizing was in order.
Whatever the reason for their non-appearance, one profession that has reaffirmed its commitment to tackling climate change is accounting. The Global Accounting Alliance, which includes ICAS, has issued a new report on biodiversity and the net-zero transition.
The report’s introduction concludes: “While the landscape for delivering a net zero, nature positive and socially just transition is complex, and characterised by myriad players and numerous regulatory efforts, these initiatives must be translated into action. The GAA – and its member bodies – are committed to playing their part.”
You can read the October 2024 report in full here.
Etiquette test
Different generations have always brought different approaches to the workplace. And it’s not unusual for there to be a culture clash when that happens, with more experienced workers in particular taking exception to younger ones. So far, so familiar.
One recent survey of employers in the US, however, has sparked much debate about the relationship between Gen Z (broadly, those born 1997–2012) and work. Intelligent.com asked nearly 1,000 business leaders involved in hiring decisions about their experience with Gen Z college graduates.
Here are some of the key findings…
• 75% of companies say some or all of the recent college graduates they hired this year were unsatisfactory, with 60% firing at least one
• 1 in 6 hiring managers say they are hesitant to hire from this cohort
• 1 in 7 companies may refrain from hiring recent college graduates next year
• 9 in 10 hiring managers say recent college graduates should undergo etiquette training
Some of the findings are unremarkable, and others might be attributed, at least in part, to personal prejudice. But that final figure is quite striking. Have two years of intermittent lockdowns and new working habits denied some young people crucial socialising time and left a lasting mark on their behaviour? And if so, what could employers do to mitigate that?
Studies repeatedly show that hybrid working leads to a happier workforce. But one consequence could be that colleagues who spend less time in each other’s company will have to work a little harder at rubbing along together.
CAs in the news
Craig Read CA
Craig Read CA, who qualified in 2008 after training with RSM, has been appointed Partner at Armstrong Watson. “Armstrong Watson has been on a real growth trajectory the past few years in Scotland and is going from strength to strength,” said Read. “I look forward to adding value to businesses in the Scottish market and the firm as we continue on our growth journey.”
Ian Giffen CA
Congratulations to Ian Giffen CA, who recently received the prestigious Institute of Corporate Directors’ Fellowship Award. Giffen moved to Canada in the 1980s and has worked largely in tech ever since. He has served on the boards of more than 25 public and private companies, including multimedia software company Macromedia, where he spent eight years as a director. Macromedia was sold to Adobe in 2005 for $3.4bn (£2.6bn).
Professor Mark Allison CA
The Royal College of Physicians and Surgeons of Glasgow has conferred an Honorary Fellowship on Professor Mark Allison CA for his outstanding contribution to accounting education. Professor Allison chaired the college’s Finance, Audit and Risk Committee in 2014 and later became its first ever lay trustee.
Road trip
PwC Glasgow is on the move, albeit a very short one, from 141 Bothwell Street to 120 Bothwell Street. But it is the choice of building that is the story here.
The Aurora Building, which sits at 120, is billed as Glasgow’s most sustainable building. Its refurbishment retained the façade and structure of the original, and it promises net-zero operation and an “excellent” Breeam rating.
“Investment in cities like Glasgow is key to our regional strategy, which is a fundamental part of our purpose and our success,” said Carl Sizer, Head of Markets at PwC UK. “It sees us leading the charge in redressing skills and productivity gaps across the UK, in order to drive long-term and sustainable growth – and aligns with our sustainability focus in Scotland which, as a region, is most impacted by the energy transition.”
Reading the riot act
In a survey for ICAS’ Shaping the Profession project, unveiled at October’s Beyond the Numbers event, 80% of respondents expressed little or no confidence in the ability of regulation to respond to emerging tech such as AI, blockchain and crypto. And literally no one ticked the box marked “very confident”. Or, in the language of Eurovision, that’s tech regulation – nul points.
Regulation is a subject that exercises the minds at ICAS, where there is a clear understanding that a balance needs to be struck. Too much regulation risks suffocating growth and innovation. Not enough and… well, most of us are old enough to remember how dire the consequences can be. The 2008 crash was incubated in a market that lacked sufficient scrutiny, and we’re still dealing with its fallout 16 years later.
Right now, AI is operating largely without regulation. It is a confusing and complex subject, not least because, depending on who you listen to, the technology is either a beneficent gamechanger, an overhyped money drain or an existential threat.
But amid all the noise, one story recently emerged that raises some serious questions about where this is heading and whether regulation needs to come into effect as soon as possible.
OpenAI publishes a risk rating whenever it releases a new version of ChatGPT, assessed across four categories – cybersecurity, biological threat, persuasion and model autonomy – and the latest version, known as Strawberry, has already hit “medium risk”.
The company reached that conclusion largely on the basis that the model could become more persuasive than a human. Evaluators said Strawberry could manipulate data to “make its misaligned [with human values] action look more aligned”.
It also means that OpenAI has already gone about as far as it safely can. The next step above that medium-risk rating is “high”. And it’s safe to assume we never want to discover what “critical” looks like – although the revelation that Strawberry could also help skilled scientists produce nuclear and chemical weapons faster and more effectively may offer a clue.
AI differs from every other technology in that it is designed to evolve and improve by itself. Couple that with the pressure on big tech companies to turn a profit on the billions being invested, and you can see why we may be storing up trouble for the fairly near future.
One recent report questioned whether OpenAI would simply adjust that risk rating to suit its own ends. After all, the company was set up as a non-profit, only to create its own for-profit arm.
Which comes back to our original point about regulation. AI could achieve so much for the greater good of humankind, but without proportionate regulation, it could be allowed to run riot, much like social media. And look how that has turned out.
Ryan Herman