The Role of Data Strategy in Optimizing Organizational Processes

The relevance of data cannot be overemphasized in today’s world, where change is the only constant. The decisions that managers and executives make emanate from the data available to them and the analysis applied to it. The turnaround time to collect, analyze, interpret and act on data has shrunk significantly, and those who can do all of this not only in the shortest possible time but also effectively and efficiently enjoy first-mover advantages.

Strong data strategies must account for the following:

  • Prerequisites of data—Data integrity is a must, because organizational actions are based on the representative data collected and analyzed. Data insight, with the key elements of reliability, consistency and timeliness, makes these data a fit foundation for long-term sustainability and appropriate action.
  • The concept of master and transactional data—Any data attribute is broadly classified as either master or transactional (a minimal sketch of the distinction follows this list). This basic classification drives further data strategy, on which pivotal decisions about data centralization and data sharing heavily depend.
  • Integration of business intelligence and market intelligence—A representative yardstick of corporate objectives is based on business intelligence. Correlating these metrics with industry data through market intelligence is vital to staying in sync with the industry outlook. This integration reflects not only how realistic the corporate objectives are, but also whether they align with the industry outlook and, more important, to what extent they are practical and achievable.
  • Data use—How do different businesses use data to understand buyer behavior and preferences?
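
To make the master/transactional distinction concrete, here is a minimal, hypothetical sketch; the record types and fields are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical illustration: master data describes stable business
# entities; transactional data records individual business events
# that reference the master records.

@dataclass(frozen=True)
class Customer:          # master data: changes rarely, shared widely
    customer_id: str
    name: str
    country: str

@dataclass
class SalesOrder:        # transactional data: one record per event
    order_id: str
    customer_id: str     # reference into the master data
    amount: float

alice = Customer("C001", "Alice & Co.", "US")
order = SalesOrder("SO-1001", alice.customer_id, 2500.0)
print(f"Order {order.order_id} references master record {order.customer_id}")
```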

Timely and correct data analysis is a universal requirement. Consider the medical profession, in which a doctor’s prescription depends on the patient’s reports. The sooner the diagnosis, the sooner the remedy can be administered. But in addition to time, the accuracy of the reports is vital. Similarly, in sports, data related to the top players’ strengths are used to determine the game plan.

From the many transformation projects that I have been part of, one thing is conclusive: One size does not fit all. Those who strike a balance between internal data quality, integrity and timeliness on one hand and optimized data centralization, data sharing and automation on the other can adapt to the changing needs of organizations more effectively.

Read Rajul Kambli's recent Journal article:

"Value Creation Through Effective Data Strategy," ISACA Journal, volume 5, 2019.

The Role of Ethics in Risk Management

Most people are aware of and talking about risk management. However, barring a handful of high-profile, sophisticated IT organizations, for most enterprises there is more talk than actual implementation of risk management practices. It is a no-brainer that everything in IT should have an active risk management practice embedded into it. When done correctly, it ensures service quality and lowers the risk of outages.

While authoring my recent ISACA Journal article, “Rethinking Risk: A New Ethics of Enterprise IT,” I conducted an Internet search on “ethics in IT” to see whether it is an issue and whether ethics issues in IT are reported. I got only a few hits and realized that ethical behavior in IT appears to be neither measured nor reported, except that the “people” factor kept popping up, especially in phrases such as “people are our most important asset” and “our people innovate and are the best.” In my opinion, however, people are unpredictable and susceptible to political and management pressures, to us-vs.-them thinking and to I/we-have-the-best-solution mind-sets. These factors do not sit well with the overall purpose of IT and are detrimental given how deeply our lives depend on IT services. Therefore, ethical behavior by IT professionals is needed, and it should be part of overall governance and risk management practices. I have also observed that people often follow processes out of fear of noncompliance; there is an opportunity to help them believe in the process or control rather than see it as a nuisance.

Depending on the industry one is in, a service issue can be catastrophic, ranging from loss of business to loss of lives. Whenever a catastrophic event occurs, organizations go through lessons learned and perhaps find a technology fix, but they rarely fix behavior.

It would be beneficial if management, consultants and auditors started observing trends in behavior. In my opinion, the only way this can happen is by having an unbiased view of how things are being done, insulated from departmental politics and management or executive pressure. I would encourage open dialogue when it comes to ethical behavior risk in processes such as change management, incident management, problem management and architectural design decisions, not to mention my favorite, bending to vendor and technology pressures. I know this is easier said than done unless management is willing to change itself—hence, this process must start with the Risk IT principle “Establish Tone at the Top and Accountability.”

Read Rajesh Srivastava's recent Journal article:

"Rethinking Risk: A New Ethics of Enterprise IT," ISACA Journal, volume 4, 2019.

Measuring Risk Quantitatively

Quantitative risk has become a growing field of interest for information security professionals. This is good news, as I strongly believe that this is the right approach to perform meaningful information risk assessments.

I first discovered quantitative risk by picking up a library book called The Failure of Risk Management.1 The book validated my concerns about the classical approach to risk management for information security, which uses qualitative indicators such as high, medium and low. As a practitioner of information risk management, I could not hide my disappointment among my peers and was really hopeful there might be a better way.

After reading Hubbard’s book, I obtained a master’s degree in information risk, which included an enlightening course on quantitative risk analysis. I then decided to bring some quantitative risk concepts to my organization and perform a pilot risk assessment comparing the outcomes of a qualitative and a quantitative assessment of the same business application.

The outcome of this pilot risk assessment was shared among my peers within my organization, as explained in my ISACA Journal article. It drew a lot of interest from key stakeholders: business application owners, IT application owners and the chief information security officer (CISO). My model deliberately took a simple probabilistic approach rather than the more advanced ones praised by quantitative experts, as I did not have the time needed to delve deeply into the realm of probabilistic analysis.
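
For readers curious what a “simple probabilistic approach” can look like in practice, the following is a minimal Monte Carlo sketch of a frequency-times-magnitude annual loss model, not the article’s actual model; every parameter below is an illustrative assumption:

```python
import numpy as np

# Hypothetical sketch of a simple probabilistic risk assessment:
# simulate annual losses as (incident count) x (loss per incident).
# All parameters are illustrative assumptions, not figures from the
# article or the pilot assessment it describes.

rng = np.random.default_rng(42)
trials = 100_000
freq = 0.4                                # assumed incidents per year
loss_median, loss_sigma = 50_000, 1.0     # assumed lognormal loss shape

counts = rng.poisson(freq, trials)        # incidents in each simulated year
losses = np.array([
    rng.lognormal(np.log(loss_median), loss_sigma, c).sum()
    for c in counts
])

# Quantitative outputs in money, not high/medium/low labels:
print(f"Mean annual loss:  ${losses.mean():,.0f}")
print(f"95th percentile:   ${np.percentile(losses, 95):,.0f}")
print(f"P(loss > $250k):   {(losses > 250_000).mean():.1%}")
```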

I still feel very passionate about the need to further develop quantitative risk analysis for information security. Quantitative analysis is already used extensively in other fields such as finance, healthcare and insurance, so there is no reason why the same approach cannot be applied to information security.

Read Benoit Heynderickx's recent Journal article:

"Evolving From Qualitative to Quantitative Risk Assessment: A Practitioner’s Dilemma," ISACA Journal, volume 4, 2019.

1 Hubbard, D. W.; The Failure of Risk Management: Why It’s Broken and How to Fix It, Wiley, USA, 2009

The Role of Incident Management in Identifying Gaps During Stabilization Period

Deploying an enterprise resource planning (ERP) system is challenging, and identifying gaps that could lead to risk is one of the most important aspects of stabilization. In my recent ISACA Journal article, I discuss how we can optimize incident management and use it to identify such gaps and risk factors at an early stage and take corrective action.

Here are some key points that any enterprise should consider during the stabilization period:

  • Channel for end users to report issues—A robust process for end users to log issues would generate comfort and provide confidence that issues are routed to the right contacts for timely resolution.
  • Structure of incident management—Ease of logging issues, timely triage of incidents to the right teams and assignment of a priority level are the fundamentals of a good incident management process.
  • Grading of incidents—The number of incidents encountered could be high; hence, a mechanism to grade them and accord priority would optimize the resources assigned to deliver resolution (a minimal grading sketch follows this list).
  • Review of incidents—Monitoring the number of incidents and analyzing them could reveal critical design gaps with a long-term impact on an organization’s processes, and it could reveal governance issues.
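
As a minimal illustration of the grading idea, the following sketch derives an incident’s priority from its impact and urgency, in the style of an ITIL priority matrix; the tiers and labels are assumptions for illustration, not the article’s model:

```python
# Hypothetical grading sketch: map (impact, urgency) to a priority
# grade used to route the incident and schedule its resolution.

PRIORITY = {
    ("high", "high"): "P1 - critical",
    ("high", "low"):  "P2 - high",
    ("low", "high"):  "P3 - medium",
    ("low", "low"):   "P4 - low",
}

def grade_incident(impact: str, urgency: str) -> str:
    """Return the priority grade for an incident."""
    return PRIORITY[(impact.lower(), urgency.lower())]

print(grade_incident("high", "low"))   # -> P2 - high
```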

In many of the deployment projects that I have been part of, incident management has not only aided in identifying gaps for early resolution, but also provided a mechanism to avoid a potential control and governance issue at a later date.
 
Read Rajul Kambli’s recent Journal article:
"Incident Management for ERP Projects," ISACA Journal, volume 3, 2019.

Simplifying Enterprise Risk Analysis

How many enterprise risk analysis reports must an organization release? A few years ago, I faced this question in light of the cost, time and complexity of the solution. My conclusion is that 1 is fine.

Cost is a consequence of the details needed, the number of people involved and their time. Complexity can come from the need for training sessions (and increased costs). When a lot of time is spent refreshing basic information, the information is updated less frequently, and its obsolescence decreases the quality of the results.

I want to propose a methodology to assess risk based on 2 levels of evaluation in order to cover any need for detail, cut any redundancy in data collection, provide simplicity in the assessment, keep update times low, and ensure great flexibility to add and maintain any new control framework at minimal cost.

It sounds complex, but it is easy enough to do. In practice, risk is a calculation of the uncertainty surrounding achievement of the business objectives. If we connect uncertainty about objectives to the maturity level of rule enforcement, then we can involve all the key users in the assessment, but each evaluation can be limited to their own work, so no training is required. Complementing this, an organization can also use a light, flexible software tool.
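
A minimal sketch of that connection, under assumed scales (control maturity scored 0-5, business impact 1-5; neither scale is prescribed by the article), showing how detailed control scores can roll up to one objective-level risk figure:

```python
# Minimal sketch, with assumed scales: maturity is scored 0-5
# (CMMI-style) by the key user who owns the control, and residual
# risk for an objective falls as enforcement maturity rises.

def residual_risk(impact: int, maturity: int) -> float:
    """impact: 1-5 business impact; maturity: 0-5 enforcement maturity."""
    exposure = 1 - maturity / 5          # uncertainty the control leaves
    return impact * exposure             # 0 (fully mature) .. 5 (none)

# Two-level evaluation: each key user scores only their own controls;
# the roll-up is what top management reviews and approves.
controls = {"access control": 4, "backup": 2, "logging": 3}
objective_risk = max(residual_risk(4, m) for m in controls.values())
print(f"Objective residual risk: {objective_risk:.1f} / 5")
```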

With this proposed methodology, we get several types of risk analysis and related documents: all the risk analyses required for International Organization for Standardization (ISO) certifications, the data protection impact assessment (DPIA) of the EU General Data Protection Regulation (GDPR) 2016/679, the business impact analysis (BIA), the risk treatment plan (RTP), IT security assessments, the level of compliance with laws and so on. This methodology provides all of this information in a single tool, managed by key users (who feed and analyze it) and top management (who make decisions and approve) in a continuous, virtuous loop, each within its own sphere of competence.

How to do this is explained in my 2-part Journal article.

Read Luigi Sbriz’s recent Journal article:
"Enterprise Risk Monitoring Methodology, Part 1," ISACA Journal, volume 2, 2019.

What Are the Challenges in Deployment and How Can They Be Mitigated?

Transformation offers many key benefits, and any enterprise that wants to sustain and grow in this ever-changing, fast-paced world will be subject to the deployment of new systems. In my recent ISACA Journal article, I discuss the various challenges any enterprise might experience and how the intensity of those challenges differs based on organizational dynamics and economic variables.

Here are some key points that any enterprise should consider in the deployment process:

  1. Getting the right people—Human capital is key to any endeavor’s success; hence, treating human resources as a dispensable commodity could be fatal. It is therefore imperative that leaders spend time and effort not only to get the right fit but also to retain talent through to a successful deployment of the project. People who have spent enough time in the business can add significant value and foresight when they are part of such projects.
  2. Selecting the right fit—Compatibility is the key to the long-term sustainability of any partnership. Hence, selecting a vendor (business partner) that will be a key stakeholder in success is very important.
  3. Defining the scope—Being optimistic is important, but being pragmatic is vital. As the famous saying goes, “do not bite off more than you can chew.” It is critical that the scope of a deployment is defined, analyzed, validated and communicated well before taking the first step.
  4. Communication is key—Communicating with those who will be affected by change is important, since it correlates directly with change management and the deployment’s outcome. Most change management issues emanate not from the system but from people who do not embrace the change.
  5. Post-go-live support and health check—Much as physiotherapy supports steady recovery, appropriate post-go-live support and the right metrics to gauge stability and consistency are the major indicators that prompt necessary and timely action.

In all the transitions and deployment projects that I have been associated with, proactive steps on these considerations have been the recipe not only for success but also for continuous improvement.

Read Rajul Kambli’s recent Journal article:
"Identifying Challenges and Mitigating Risk During Deployment," ISACA Journal, volume 6, 2018.

Key Steps in a Risk Management Metrics Program

Performance evaluation of an organization’s risk management system ensures that the risk management process remains continually relevant to the organization’s business strategies and objectives. Organizations should adopt a risk metrics program to formally carry out performance evaluation. An effective risk metrics program helps in setting risk management goals (also known as benchmarks), identifying weaknesses, determining trends to better utilize resources and determining progress against the benchmarks.

My recent ISACA Journal article:

  • Discusses the need for linking key risk indicators (KRIs) to key performance indicators (KPIs), and how it helps in getting buy-in of business managers in risk management initiatives
  • Highlights how a risk metrics program can be used to integrate KRIs and KPIs for effective technology risk management
  • Leverages the 3-lines-of-defense model as a primary means to structure roles and responsibilities for risk-related decision-making and control to achieve effective risk governance, management and assurance, and to distribute KRIs among the 3 lines of defense
  • Discusses the role of governance, risk and compliance (GRC) tools in automating the risk metrics program and provides an overview of risk metrics automation workflow in a typical GRC solution

Practical Guidance

The key steps in the risk management metrics program are:

  • Select metrics based on the current maturity level of risk management and information security practices in your organization.
  • Develop the selected metrics by capturing all their relevant details in a predefined template (called the Metrics Master) to guide metrics collection, analysis and reporting activities. Consider covering such details as objective of the metric, entry criteria giving the prerequisite for implementing the metric, tasks involved, formula to calculate the metric value, the target value set for the metric, verification and validation, and exit criteria. A suggested template with a sample entry is provided in figure 1.
  • Implement the metrics and capture the evidence of implementation in a register (called the Metrics Data Register), and transfer the relevant data values to a pre-defined template (called the Metrics Calculation Register) to facilitate computation of metrics values.
  • Analyze the computed metrics values, evaluate the trends, identify the areas for process and control improvement, and draft an action plan for continuous improvement of information security and the risk management posture of your organization.
  • Report the risk management and information security trends as indicated by the metrics to the risk manager/information security manager who would review the trends and communicate further, if required, to various stakeholders.

Start the metrics program with a small number of metrics (e.g., 6) and add new metrics progressively as the risk management and information security maturity of your organization improves.
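
As one hypothetical way to picture the Metrics Master described above, the following sketch encodes the suggested template fields and evaluates a sample metric against its target; the sample metric and its values are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical rendering of the "Metrics Master" template: one record
# per metric, capturing the details the template suggests. The sample
# metric below is an illustrative assumption, not from the article.

@dataclass
class MetricDefinition:
    objective: str        # why the metric exists
    entry_criteria: str   # prerequisite for implementing the metric
    tasks: str            # activities involved in collecting it
    formula: str          # how the value is computed
    target: float         # benchmark the value is judged against

patch_metric = MetricDefinition(
    objective="Track timely remediation of critical vulnerabilities",
    entry_criteria="Vulnerability scanning is in place",
    tasks="Pull scan results; match against patch records",
    formula="patched_on_time / total_critical * 100",
    target=95.0,
)

def evaluate(metric: MetricDefinition, value: float) -> str:
    """Compare a computed metric value against its target."""
    return "on target" if value >= metric.target else "needs action"

print(evaluate(patch_metric, 91.0))   # -> needs action
```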

Author’s Note
Rama Lingeswara Satyanarayana Tammineedi is currently working with Tata Consultancy Services.

Read Rama Lingeswara Satyanarayana Tammineedi’s recent Journal article:
"Integrating KRIs and KPIs for Effective Technology Risk Management," ISACA Journal, volume 4, 2018.

The Benefits and Risk of Blockchain Technology

Blockchain technology, which rose to prominence in 2008 with the publication of the fascinating white paper Bitcoin: A Peer-to-Peer Electronic Cash System, is widely predicted to drastically transform several sectors. For instance, blockchain-based smart contracts are anticipated to facilitate the direct, transparent and irreversible transfer of funds from donors to those in dire need, eliminating needless intermediary costs and cutting global poverty. The healthcare sector also fits the bill perfectly for blockchain implementation. Through its core virtue of decentralized architecture, blockchain could supplant archaic, fragmented and heterogeneous healthcare systems—boosting the quality of patient care and lowering healthcare delivery costs. Potential blockchain use cases are as wide-ranging as the enterprises trying them.

At the same time, for all its potential, the technology is also rife with fresh and complex business risk. My recent article explores in depth 3 fundamental challenges business leaders should carefully consider to maximize blockchain’s potential.

Patchy Regulatory Frameworks
Until recently, there were very few laws anywhere governing digital currencies and initial coin offerings (ICOs). Regulators are starting to act, but the responses are still disjointed and sporadic. Jurisdictions such as China and Hong Kong have outlawed ICOs. Meanwhile, countries such as Australia, Switzerland and the United States have issued guidelines articulating the circumstances under which an ICO is deemed a security. The Central Bank of Nigeria, on the other hand, has distanced itself from Bitcoin regulation, stating that it has no intention of regulating blockchain, just as it has no intention of regulating the Internet.

Inevitably, these regulatory loopholes have lured counterfeiters and Ponzi schemers. Through promises of extraordinary returns, predatory enterprises ensnare unwitting investors and then vanish after closing the purported ICO. Furthermore, as the German Federal Financial Supervisory Authority rightfully warned, “Typically, projects financed using ICOs are still in their very early, in most cases experimental, stages and therefore their performance and business models have never been tested.”

Kicking the proverbial can down the road or assuming the cryptocurrency industry will proactively police itself would be naive and would turn a blind eye to the original intentions of cryptocurrency inventors. Regulators could, for example, take a cue from Canada’s Autorité des marchés financiers (AMF), which extended its regulatory sandbox to ICOs, providing an important window to become acquainted with ICO risk without stifling the technology. In addition, regulators should prohibit pension funds and other pools of public assets from investing in volatile and uncertain cryptocurrencies or ICOs.

Cybersecurity and Vulnerabilities
Since their inception, blockchains have been widely touted as “well-protected, reliable and immutable.” These supposed virtues have considerable merit—blockchain uses asymmetric keys to encrypt and decrypt content, ensuring high levels of authentication and nonrepudiation. But if we zoom in on each high-profile cryptocurrency heist, we can easily conclude that blockchain deployments are rife with security flaws. Hackers continue to exploit common issues—lack of multisignature support, low-security hot wallets, poor input validation, insider threats and a host of common defects—to steal billions. Business leaders should therefore carefully consider the security implications of each blockchain technology and ensure that a minimum set of non-negotiable controls is baked into projects from inception.
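
To see why the “immutable” claim has merit even though deployments keep getting breached, consider a toy hash chain: tampering with any block invalidates every later one, which is precisely why attackers target wallets and endpoints rather than the chain itself. A minimal sketch:

```python
import hashlib

# Toy hash chain illustrating the "immutable" property: each block
# commits to the previous block's hash, so altering any block breaks
# every subsequent link. Real-world heists typically bypass this
# property by attacking wallets, keys and application code instead.

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

payloads = ["alice->bob:5", "bob->carol:2", "carol->dave:1"]
chain, prev = [], "0" * 64                # genesis hash
for p in payloads:
    prev = block_hash(prev, p)
    chain.append(prev)

# Tamper with the first transaction, then re-verify the chain:
payloads[0] = "alice->mallory:5"
prev, valid = "0" * 64, True
for p, recorded in zip(payloads, chain):
    prev = block_hash(prev, p)
    valid &= (prev == recorded)
print("chain valid after tampering?", valid)   # -> False
```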

Impediments to Transformational Change
As with any other disruptive trend, the rise of blockchain reignites the dynamic interplay between continuity and change. For instance, blockchain renders obsolete a wide array of existing centralized applications, many of which have operated steadily for years and still underpin strategic revenue lines. Furthermore, an enterprise’s culture—“elements of social behaviour and meaning that are stable and strongly resist change”1—can present significant inertia to blockchain implementations as employees resist change and stick to their old ways of working. Maneuvering past these constant dualities requires a careful balance between innovation and business stability: Neither can be dealt with in isolation.

Read Phil Zongo’s recent Journal article:
"The Promises and Jeopardies of Blockchain Technology," ISACA Journal, volume 4, 2018.

1 Rumelt, R. P.; Good Strategy/Bad Strategy: The Difference and Why It Matters, Profile Books, United Kingdom, 2011

Love Them or Loathe Them, Good IT Business Cases Are of Inestimable Value to Good IT Portfolio Managers

Many struggle to pull credible business cases together. Business case mechanics aside, the hard work involves not only identifying the required data, collecting them and ensuring they are of the right quality, but also securing stakeholder buy-in for the business case, hopefully without too much fudging. That business cases can be fudged highlights the importance of an explicit assumptions section; it is a vital component of a good business case because it can be used to assess the veracity of the business case’s inputs.

In spite of how hard building a business case can be though, properly assessing the contribution of new IT investments to the organization helps prevent wasting precious organizational resources on “investments” that yield little for the organization. A good business case also helps ensure a good understanding of the dependencies of the project on various organizational resources, all of which helps ensure the business success of the IT investment.

Furthermore, an IT business case is a key part of good IT governance, and good IT governance facilitates good corporate governance. Ultimately, corporate governance ensures that IT innovation—as a particular subset of IT investment—is suitably focused on the organization’s strategy and that it is appropriately resourced to fulfil its various promises.

One of the promises of IT innovation is high returns. Those in the investment community, however, know that high returns come at a cost: higher risk. Indeed, part of the reason many corporate IT innovations fail is that this risk is not identified, thereby compromising the innovation’s key promise: to advance the organization.

Interestingly, while IT innovation may be obvious in some organizations, in others, IT innovation is often relative. For example, in an organization still running on spreadsheets, the evolution to a database may be considered innovative. In most large organizations, there is a portfolio of IT investments that can be considered innovative, at least in their terms.

Given the previously mentioned riskiness, identifying innovative IT is key. In large organizations, categorizing it is something else. Usefully, investment-grade business cases communicate 2 things about a prospective IT innovation: its expected financial returns and the expected variability of those returns (its riskiness). Armed with these 2 parameters, it becomes easy to identify the IT investments that are innovative in the context of the organization’s risk appetite. For these investments, actively managing the identified risk is a critical success factor.
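
A hedged sketch of that screening step: given each case’s expected return and return variability (both read from the business case itself), flag the investments whose riskiness exceeds an assumed risk appetite. All names and figures below are invented for illustration:

```python
# Illustrative portfolio screen: business cases supply an expected
# return and the variability (riskiness) of that return; investments
# beyond the assumed risk appetite are the "innovative" ones that
# need active risk management. All figures are invented.

RISK_APPETITE = 0.15                 # assumed max acceptable return std dev

portfolio = {                        # name: (expected return, std dev)
    "spreadsheet-to-database": (0.08, 0.05),
    "core-system-rewrite":     (0.20, 0.25),
    "ml-pricing-engine":       (0.30, 0.40),
}

for name, (ret, sigma) in sorted(portfolio.items(),
                                 key=lambda kv: -kv[1][0]):
    tag = ("innovative: manage risk actively"
           if sigma > RISK_APPETITE else "business as usual")
    print(f"{name:24s} E[r]={ret:4.0%}  sd={sigma:4.0%}  {tag}")
```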

My recent Journal article, “The Power of IT Investment Risk Quantification and Visualization: IT Portfolio Management,” expands on all of this and sheds new light on IT portfolio management as a tool for managing different parts of the IT portfolio for maximum organizational impact.

Read Guy Pearce’s recent Journal article:
"The Power of IT Investment Risk Quantification and Visualization: IT Portfolio Management," ISACA Journal, volume 4, 2018.

Performing Cyberinsurance “CPR”

Cyberinsurance and data privacy will garner more focus for the remainder of 2018 and beyond. The impending “Equifax effect,” which most of us anticipated, took shape in late February 2018 when the US Securities and Exchange Commission (SEC) issued guidance stating that public companies should inform investors about cybersecurity risk even if they have never succumbed to a cyberattack. The guidance also emphasizes that companies should publicly disclose breaches in a timely manner.

This development aligns perfectly with the (cyber)consumers, providers and regulators (CPR) cycle (see figure 1) that I propose in my recent Journal article, which necessitates participation from 3 key players: cyberinsurance providers, consumers and regulators. This combined effort not only improves how cybersecurity risk is addressed and estimated from an insurance coverage perspective but also minimizes cataclysmic breaches. Providers need to be able to identify the right amount of cyberrisk they are willing to undertake in order to price coverage appropriately. This, in turn, depends on consumers knowing, quantitatively, how much risk they own.

Figure 1: CPR Cycle

Today, numerous ever-evolving cyberthreats (e.g., zero-day exploits, Internet of Things botnet distributed denial-of-service attacks, ransomware) result in costs that are not inherently covered by most cyberinsurance policies. Above all, cyberinsurance has always been an add-on to traditional insurance policies. Historically, insurance companies relied on abundant data to decide how much auto or home insurance coverage to offer a person or entity. In the cyberworld, the common complaint from both providers and consumers is that there are not enough data to rely on.

Heat maps remain a staple resource for IT risk professionals estimating risk worldwide. In my experience performing security risk assessments, I always felt uneasy leveraging heat maps to estimate risk. It turns out there are better, proven statistical and probabilistic methods that can be adopted to estimate cyberrisk quantitatively (in monetary figures rather than in red, yellow or green), especially when there is a dearth of data. An organization’s emphasis should be on addressing the burgundy arrows in the CPR cycle, and my recent Journal article provides an overview of these methods, their potential benefits and references for attaining these goals.
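
As one example of such a method (a common technique in the quantitative risk literature, not necessarily the one the article uses), an expert’s 90% confidence interval for loss per breach can be fitted to a lognormal distribution and simulated to produce monetary risk figures; the interval below is an illustrative assumption:

```python
import math
import random

# Fit a lognormal loss distribution to an expert's 90% confidence
# interval, then simulate losses in dollars. Useful when historical
# data are scarce. The interval is an illustrative assumption.

LOW, HIGH = 10_000, 500_000     # expert's 90% CI for loss per breach (USD)
Z90 = 1.645                     # z-score bounding the central 90%

mu = (math.log(LOW) + math.log(HIGH)) / 2
sigma = (math.log(HIGH) - math.log(LOW)) / (2 * Z90)

random.seed(7)
losses = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))
print(f"Median loss:     ${losses[len(losses) // 2]:,.0f}")
print(f"95th percentile: ${losses[int(0.95 * len(losses))]:,.0f}")
```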

The purpose of attempting cyberinsurance CPR is to build a continuously maturing ecosystem comprising:

  • Cyberinsurance providers—Who will be able to provide coverage matching the amount of risk they believe they are undertaking, without providing a surplus
  • Cyberinsurance consumers—Organizations with the prowess to estimate risk accurately, enabling them to transfer risk to insurance providers at optimal pricing while covering their bases when a breach ensues
  • Regulators—Who will impose better and more timely breach-reporting procedures on providers and consumers and require organizations to continuously adopt robust security and privacy practices

Read Indrajit Atluri’s recent Journal article:
"Why Cyberinsurance Needs Probabilistic and Statistical Cyberrisk Assessments More Than Ever," ISACA Journal, volume 2, 2018.
