As a risk practitioner, have you ever tried to describe what you do for a living to a family member or a friend? If so, you’ve likely experienced their acquiescent, politely confused reaction as you articulate concepts like risk assessments, controls, tests, tolerance, appetite, key risk indicators and governance, along with the host of other tactics that make up a practitioner’s day-to-day responsibilities. Just as you conclude your pride-filled intellectual description, feeling you did a great job explaining what you do, your conversational partner replies, “Wow, that sounds awesome! So, what do you actually do?” Uncertain how to respond, you begin to retrace your words, only to realize that you are asking yourself that very same question, and questioning your professional identity along with it. You ponder, “What DO I do, and who am I as a professional?”
You may not know it, but artificial intelligence (AI) has already touched you in some meaningful way. Whether approving a loan, moving your resume along in the hiring process, or suggesting items for your online shopping cart, AI touches all of us – and in some cases, with much more serious consequences than just putting another item in your cart.
As this technology becomes more widespread, we are discovering that maybe it’s more human than we would like. AI algorithms have been found to exhibit racial bias when used to make decisions about the allocation of health care, criminal sentencing and policing. In its speed and efficiency, AI has amplified and put a spotlight on the human biases that have been woven into its black-box decision-making. For a deeper dive into AI and racial bias, read the books Automating Inequality, Weapons of Math Destruction, and Algorithms of Oppression: How Search Engines Reinforce Racism.
Artificial intelligence (AI) and machine learning are common terms in the world of emerging technology. Although still sounding futuristic to some people, AI is already being deployed everywhere from fantasy football weekly recap emails, to retail environments, to advanced, state-sponsored surveillance systems. In ISACA’s Next Decade of Tech: Envisioning the 2020s research, a survey of more than 5,000 global technology professionals, 38% of respondents identified AI and machine learning as the most important enterprise technology of the next decade – more than cloud platforms (22%), big data (16%) and even blockchain (8%). Ballooning costs, labor shortages, poor service quality, strong public interest, and recent market shifts forcing the enhanced availability of electronic records are strong indicators that few industries will experience the impact of AI more than healthcare.
The rapidly increasing pace of technology change and digital disruption forces organizations to address, at an unprecedented rate, opportunities and risks that could make or break their success. In the new decade of the 2020s, technology-driven exponential change will accelerate even more sharply. Unfortunately, most organizations are ill-prepared for what is to come, and will remain so unless they replace their reactionary approach to the technology landscape with an anticipatory one.
Reactionary strategies are reliant on attempting to become more agile and react quickly after a disruption or problem occurs – perhaps an unforeseen risk related to deploying a new technology or a competitor’s new product that is suddenly commanding market share. While the ability to muster an agile response is an important competency for organizations to possess, the organizations that will succeed in the 2020s and beyond will be the ones that become anticipatory, using hard trends (based on future facts) to identify disruptions before they disrupt and to pre-solve predictable problems.
The increasing reliance on big data and the interconnection of devices through the Internet of Things (IoT) have created a broader scope for hackers to exploit. Now both small and large businesses have an even wider attack surface to protect. Yet, all it takes is one new trick for an attacker to penetrate even the most sophisticated firewalls in a matter of seconds. The good news is that while, on the one hand, increased reliance on big data puts businesses at risk of cyberattacks, if used well, the same data can be used to enhance cybersecurity.
How Big Data Is Helping Cybersecurity

We are so used to the idea of protecting data that using it to bolster cybersecurity might not be top of mind. However, it’s not only sensible, but also incredibly effective. According to the results of a study conducted by Bowie University, 84% of businesses using big data successfully managed to block cyberattacks. What was their secret? Three words: big data analytics.
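To make the idea concrete, here is a minimal, hypothetical sketch of the kind of analysis big data platforms perform at far greater scale: scanning a stream of security events for statistical outliers. The function name, data, and threshold are illustrative assumptions, not drawn from the study cited above.

```python
# Illustrative sketch: flag days whose failed-login count deviates sharply
# from the historical baseline. Real big data analytics pipelines apply the
# same idea across billions of events; names and thresholds here are made up.
from statistics import mean, stdev

def flag_anomalies(daily_login_failures, threshold=2.5):
    """Return indices of days whose count is more than `threshold`
    standard deviations away from the mean of the series."""
    mu = mean(daily_login_failures)
    sigma = stdev(daily_login_failures)
    if sigma == 0:  # a perfectly flat baseline has no outliers
        return []
    return [i for i, count in enumerate(daily_login_failures)
            if abs(count - mu) / sigma > threshold]

# A quiet baseline with one burst of failed logins on day 7.
history = [12, 9, 11, 10, 13, 8, 12, 250, 11, 10]
print(flag_anomalies(history))  # prints [7]
```

At production scale the same pattern runs over distributed log stores rather than a Python list, but the principle is identical: enough historical data makes abnormal behavior statistically visible before an attacker finishes their work.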