Imagine it is sometime in the 22nd century and that the future you is preparing for a complex surgical procedure at the local robot-run hospital, where it has become commonplace for robots to perform sophisticated, repeatable tasks, such as heart surgery, on human patients. This is the first time a robot is tackling a septal myotomy on a human, on you no less. Almost 160 years after it was first performed, it remains one of the most complicated medical procedures in the world, still taking a human doctor up to 6 grueling hours, all the while nothing but a machine keeps you alive.
In the days leading up to the procedure, the chief robot doctor of the facility, Dr. Ava—named after a character in a cult classic film made more than a century before, and all but indistinguishable from a human except for the odd irregular whirring sound that occurred whenever she looked up toward the sky—sat you down to explain some of the considerable risk factors involved in the procedure. At one point, your eyes wandered to a few framed diplomas hanging on the wall, including one from the renowned C-3PO Institute, where Dr. Ava must have learned her diplomacy and her disarmingly reassuring bedside manner.
Your eyes are then drawn to one from the Isaac Asimov Institute, named after one of the most famous 20th century science writers and author of the evergreen 1950s classic I, Robot. Recalling his works, you become distracted by thoughts of the Three Laws of Robotics, how robots learn and whether they are sufficiently equipped to handle the variability that all too often occurs in complex medical procedures.
It is then that you begin to think about the quality of data required for a robot to learn, especially one performing something as delicate as heart surgery. Quite simply, even a small amount of bad data could mean death on the operating table under a robot; 1 micrometer too far to the right could be all it takes. You then become lost in flashbacks of a century before, from those holographic history “books” or holobooks you so enjoy interacting with, a time when AI practitioners were barely aware—some actively choosing to remain ignorant even—of the fact that data could be a kind of evil beyond their wildest dreams, a state of affairs that caused the nightmare on Earth otherwise known as the Blackening of the late 2030s.
The Blackening was a downstream outcome of the big data hype of a time near the start of the 21st century. It was a time of almost unconstrained data fusion, analytics, machine learning (ML) and robotics by many self-proclaimed “experts” using the primitive technology of the time to increase efficiencies and to supposedly better serve humankind. Little did they want to know that dirty data do to an algorithm what poison does to a man. It kills, sometimes slowly.
Furthermore, those holobooks taught you about a time around the mid 2010s when many humans had raised concerns about the future of human work and how robots would take over the world. Oh, how that crowd would chant “I told you so” if they were alive today. Warnings were sounded over the need to assess the quality of data for artificial intelligence (AI), including by that budding author Pearce, but the dirty data poison from decades of negligence, ignorance and technological debt leached into our robot helpers, ultimately leading them to run amok against us in scenes akin to that classic fiction Westworld. But alas, the siren’s call of power and profit was too strong. As a species, we did not actually think we would survive much beyond the middle of the 21st century. We were doomed, but there was a kind of h…
A faint whirring sound from Dr. Ava gently brings you back, and you ponder the Global Artificial Intelligence Act (GAIA) of 2078 and how it gave a new impetus to human life on our pale blue dot. In particular, it required that all production AI instances be able to demonstrate the quality of the data used. Not only that, it required strict evidence of where the data came from, how they were transported and how they were transformed. It required that the data used in AI be described in unambiguous human terms to ensure that data would only be used as intended. In essence, it required data to be tested and controls to be put in place to prevent poor data from contaminating the combined consciousness of humans and machines. After this demanding mental journey, you found yourself easing into a greater sense of peace and relaxation, a state vital for the success of the medical procedure to come. By the way, the cost of noncompliance with GAIA? Exile to that cold, barren Martian moon Deimos.
So to all of you AI practitioners living back there in 2019, please make sure you read my recent Journal article to understand why data intended for AI should be the subject of critical assessment and data audits. Preventing the Blackening of the late 2030s is all in your hands.
Read Guy Pearce's recent Journal article:
"Data Auditing: Building Trust in Artificial Intelligence," ISACA Journal, volume 6, 2019.
Increasingly, security professionals use language that makes a distinct comparison between our physical environment and our digital infrastructures. We use terms such as “digital ecosystem,” “digital footprint,” “IT environment,” “data leakage” and “data pollution.” As data breaches continue to increase in number and severity, we need to begin thinking about how we protect today’s data for tomorrow’s future digital strategies.
What Is Cybersustainability?
Fundamentally, cybersustainability looks at data as a finite resource, similar to a coral reef or fossil fuels. Accordingly, we can look at data from both the “prevent from being polluted” perspective and the “preserve the resource” perspective.
Although no official definition of cybersustainability exists, we use the following working definition, which encompasses:
- Adopting/maturing digital transformation strategies
- Establishing access and governance policies that promote cyberhealth
- Continuous monitoring to maintain data privacy/security
- Communicating across stakeholders
- Promoting operational resiliency
Prevent Data Pollution
When we look at cyberecosystems, we discuss the problems associated with data leakage. Data leakage includes a variety of unauthorized data transfers from an organization’s systems, networks and software, whether physical, digital or intellectual. For example, a user with excessive access to information can download the data or simply remember it; both are considered data leaks.
Data pollution, in this case, means the way in which data can be accessed or changed within a digital ecosystem such that it impacts the information’s integrity, confidentiality and availability. In many ways, this definition aligns with the concept of a leaking underground storage tank. Homes heated with oil often have old, outdated oil storage tanks that leak the contaminant into the soil. In the same way, unauthorized access leaks data into the larger population, undermining privacy.
Preventing data pollution, therefore, requires organizations to control user access to information using the principle of least privilege.
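As a rough illustration, least privilege amounts to a deny-by-default access model: a role holds only the permissions explicitly granted to it, and everything else is refused. The sketch below uses hypothetical role and permission names purely to show the idea:

```python
# Minimal sketch of deny-by-default, least-privilege access control.
# Role and permission names are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "hr_admin": {"reports:read", "pii:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: a role holds only permissions explicitly granted."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

An analyst can read reports but is refused PII access, and any unrecognized role is refused everything, which keeps accidental "excess access" from leaking data in the first place.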
Preserve Data as a Resource
On the other side of our cybersustainability equation, data are also a finite resource we need to preserve and protect. If we compare data to an environmental resource such as a coral reef, the similarities become more tangible. For example, coral reefs and the organisms that live in them must be protected because so few of them still exist. They are finite environmental resources. Similarly, non-public personal data are finite resources. People have only one Social Security number and one birth date.
Protecting data as a resource, therefore, is imperative. Organizations need to protect and preserve non-public personally identifiable information (PII) because data compromises “deplete” the resource.
Protecting and preserving the integrity, confidentiality and availability of data as a finite resource requires organizations to monitor not only for unauthorized external access to PII, but also for excessive internal access to it.
Why Identity Governance and Administration Enables Cybersustainability
The World Economic Forum defines the 4th Industrial Revolution as a fundamental change in the way people live, work and relate to one another arising from new technologies that advance the convergence of physical, digital and biological worlds.
As we evolve our technologies during this new Industrial Revolution, we need to create forward-thinking digital transformation strategies to prevent the pollution inherent in them. We should be learning from the physical environmental pollution created by factories to prevent similar damage to data arising from the 4th Industrial Revolution.
Thus, we need to look to the new perimeter—identity—to shape our digital transformation strategies. Relying on legacy identity management solutions leaves user data at risk. Protecting data as a finite resource and preventing data pollution relies on creating a risk-based, context-aware identity governance and administration (IGA) program.
Unfortunately, managing identity and access becomes difficult for organizations with complex IT ecosystems. Managing the proliferation of user identities—human and non-person—and the inundation of access requests across often disconnected dashboards creates both a human error risk and increased operational cost. To mitigate this risk and decrease these costs, organizations can incorporate intelligent analytics with predictive access capabilities.
Protecting Today’s Information for Tomorrow’s Technology
As we attempt to meet the rapid pace of modern technological changes, we need to focus on creating forward-thinking digital transformation strategies. We can learn from the mistakes of our predecessors who led previous Industrial Revolutions. By applying environmental sustainability theory to cybersecurity, we can better protect sensitive information long term and, ideally, prevent our advances from contaminating or depleting data resources.
Read Karen Walsh and Joe Raschke's recent Journal article:
"Sustainable Development for Digital Transformation," ISACA Journal, volume 5, 2019.
The trend appears to be presenting itself all over the place. TV commercials, online ads and corporate product announcements are all saying the same thing: Artificial intelligence (AI) adoption and use are exploding. As an information security and assurance professional, I admit that I did not really know much about this emerging technology, so I decided to begin educating myself on the subject, even if only at an introductory level. I started performing online research to understand the current market size, future growth projections, how to achieve certification and education and, most important, approaches to governing and securing the use of AI solutions.
My company presently allocates each employee a modest annual training budget, so I leveraged those funds to select a training provider and begin taking AI classes as I performed the research for my recent Journal article. I gravitated toward edX because its curriculum was 100% free but also provided certificates after completing courses and quizzes, which is also useful for IT certification continuing professional education (CPE). As I completed my AI edX courses and online research, I wanted to structure my ISACA Journal article in a conversational and informative manner, starting with defining AI and addressing some common misconceptions. From there, I wanted to address market size, projected growth trends and who the players are in the market. I believe this is always important because this information provides context on what to expect in the near term and long term and which organizations to keep your eye on.
The biggest challenge I encountered came when researching and learning how to secure AI solutions. While AI is not new (it has been researched, discussed and developed over the past several decades), it has only become commercially adopted within the past 10 years and is still in its infancy. I did not find a free-to-use, mainstream security framework or set of publications that discussed how to directly approach AI security. Through several online articles and the completion of the introductory edX courses, I managed to Frankenstein the article together, and my hope is that at least 1 reader will learn something valuable that will assist or empower their enterprise in securing the use of AI solutions. My secondary challenge is to you: if you can write a related article or audit program, please do. It will benefit us all!
Read Adam Kohnke's recent Journal article:
"Preparing for the AI Revolution," ISACA Journal, volume 4, 2019.
It has become almost impossible to face cybersecurity issues just by using the presently available countermeasures; hackers always find ways to bypass them. Whatever the future state of technology, some information related to people and national security must be kept secret. To propose a viable response to this situation, Octosafes Inc. conceived a theoretical system based on 5 hypotheses and the mathematical laws of chaos. The 5 hypotheses are:
- A child born today can be identified and authenticated by a computer without using the child’s name or a numerical identifier (SSN).
- On a certain scale, e.g., micron (micrometer) or microsecond, it is impossible for 2 people or 2 objects to be exactly the same, e.g., identical twins, fingerprints or 2 sheets of paper in the same ream.
- To become safer or even impenetrable, information systems must obey new laws and new logic (other than Boolean logic).
- The computer can protect people by protecting itself.
- Based on the previous hypotheses, it is now possible to design information systems with limited compatibility, i.e., it is impossible for 2 computers to communicate if there has not been some “physical” interaction (remotely or not) between these 2 systems.
The 2 essential laws of chaos theory are:
- Some degree of uniformity and order can be found in apparently erratic and uncontrollable phenomena.
- A phenomenon that is very controllable and predictable can become very unpredictable over a long time period.
Based on these observations, mathematicians were able to create patterns they called strange attractors. They have also discovered fractal dimensions of space that are no longer whole numbers and structures that can be replicated to infinity.
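The second law of chaos can be demonstrated with the classic logistic map, a standard textbook illustration (not taken from our card design): a fully deterministic one-line rule whose trajectories nevertheless become unpredictable. Two starting points differing by one millionth diverge completely within a few dozen steps:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a simple deterministic rule
# that becomes unpredictable in its chaotic regime (r = 4 here).

def logistic_orbit(x0: float, r: float = 4.0, steps: int = 50) -> list:
    """Iterate the logistic map, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # initial difference of only 1e-6
# Each step is fully deterministic, yet within a few dozen iterations
# the two trajectories bear no resemblance to each other.
```

This sensitivity to initial conditions is exactly what makes a chaotic physical structure, like the card surface described below, so resistant to cloning: reproducing it would require impossible precision.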
Our recent Journal article is based on these 5 hypotheses and on the integration of chaotic models in information technology. However, it is rare for a mathematical abstraction to totally fit with reality, so we used a type of stratagem to integrate chaotic processes into the digital card that is at the center of our IT security project. Because this card will be made billions of times and the spatio-temporal coordinates of each card are unique, any attempt to clone or reproduce this card is doomed to failure.
The actual structure of each card is revealed and stored in the authentication server (AS) using a microlaser scanning the surface of each card at a specific frequency, which is determined at the time of initialization, i.e., from the first card and AS interaction. This single reading at submillimetric scales and microsecond time slots will be similar to the outline of a beach (mapped at a millimeter scale) after each ebb and flow of the waves. Each grain of sand that moves changes the outline of this beach, and its analysis, even with the most sophisticated devices, becomes complex or even chaotic. With regard to our card, these ebb and flow movements have been replaced by frequency variations. At a frequency x, the microlaser can be in a hollow, and at a frequency y, it can be on a bump. Because these hollows and bumps are imperceptible to sight and touch, it is impossible for a human to manipulate them for wrongdoing. In addition to these obstacles arising from the physical structure of the card, other, equally unpredictable ones can be added, such as the biometric and genetic data of the card owner, variations in the time of the records, corrections after writing some wrong information, etc.
By introducing the mathematical laws of chaos into cybersecurity, the hope is to initiate other logic and other electronic circuits that go beyond Boolean algebra. In fact, Boolean logic is still utilized in our system, but some other parameters (based on our 5 hypotheses) help to significantly modify this logic by introducing, for the first time, notions such as “the computer can protect itself” or, thanks to the evolution of technology, “systems with limited compatibility.”
Read Jean Jacques Raphael, Jean Claude Célestin and Eric Romuald Djiethieu's recent Journal article:
"Chaos to the Rescue," ISACA Journal, volume 4, 2019.
How do you transform security and privacy compliance requirements into practical steps that can be executed by a team? It is not easy, especially in an Agile environment that wants to move quickly. To say there is a gap between complying with policies and actually executing tasks to that end is just the tip of the iceberg. The rest of the iceberg looks like this:
- Policies, regulations and standards are designed to be high-level and abstract. There are no simple steps to follow to meet them.
- Policy-to-execution (P2E) platforms are limited to technical steps for only the software development life cycle (SDLC).
- Regulatory bodies continue to publish new standards beyond the SDLC.
- Organizations may perceive security as a disruptor.
For instance, section 4.2 of the PCI-SSLC requires that "[n]ewly discovered vulnerabilities are fixed in a timely manner. The reintroduction of similar or previously resolved vulnerabilities is prevented."
This directive is tantamount to saying “perform security testing using techniques such as dynamic application security testing (DAST), static application security testing (SAST) and interactive application security testing (IAST),” but there is no indication of how to go about that. Even the most security-conscious developer would not know where to begin. The framework we propose in our Journal article tackles this gap between the need to comply with a regulation and the lack of actionable tasks for doing so.
Effectively translating this policy into actionable tasks requires research. We started with literature reviews of existing workflows and controls for security testing. We sought the following:
- The identification of gaps
- The analysis of gaps
- The definition of actionable steps
Next, we interviewed subject matter experts (SMEs). These are the kinds of questions we wanted answered:
- What are the existing methods in use for performing these tasks?
- How often should a task be performed?
- What are the relevant roles and responsibilities in your team?
We used these answers and criteria to create a list to:
- Determine processes to perform a task beyond simply the SDLC from beginning to end.
- Determine an owner for each task.
- Automate by integrating with DevOps tools.
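To make the steps above concrete, here is a minimal sketch (with hypothetical task names, owners and frequencies) of how a high-level control such as PCI-SSLC section 4.2 can be decomposed into owned, schedulable tasks, flagging those that can be wired into the DevOps pipeline:

```python
# Illustrative sketch: representing one policy clause as concrete,
# owned, schedulable tasks. Task names, owners and tools are hypothetical.

from dataclasses import dataclass

@dataclass
class SecurityTask:
    control: str      # policy/standard clause the task traces back to
    action: str       # concrete step a team member can execute
    owner: str        # accountable role
    frequency: str    # how often the task should be performed
    automated: bool   # can it run in the DevOps pipeline?

tasks = [
    SecurityTask("PCI-SSLC 4.2", "Run SAST scan on every merge request",
                 "Developer", "per commit", True),
    SecurityTask("PCI-SSLC 4.2", "Run DAST scan against staging",
                 "Security engineer", "nightly", True),
    SecurityTask("PCI-SSLC 4.2", "Triage and re-test resolved vulnerabilities",
                 "Security engineer", "per release", False),
]

# Tasks that can be integrated directly with DevOps tooling:
automatable = [t.action for t in tasks if t.automated]
```

Each task answers the SME interview questions directly: what method, how often and who owns it, which is precisely the translation from abstract clause to executable step.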
Developing securely by design is the way forward, and as technology evolves, new controls and standards arise to meet the need to develop secure and compliant applications. Meeting those controls, however, is not straightforward without a consistent method to convert policy to procedure without colliding into an iceberg.
Read Mina Miri, Amir Pourafshar, Pooya Mehregan, Nathanael Mohammed's recent Journal article:
"Bridging the Gap between Policies and Execution in an Agile Environment," ISACA Journal, volume 4, 2019.
My recent Journal article on the Internet of Things (IoT) was inspired by an article I read on a botnet takedown that involved the digital recording devices that many people have connected to their television. It reminded me of the information security problems that came to light as new computer software was developed and used by many organizations and people. When the personal computer industry was in its infancy, there was no thought about misuse (e.g., local denial-of-service attacks, adding malicious software to the computer). The only concern was getting products into the marketplace and selling them. Information security and privacy were not a concern; device capabilities and features were.
We are in the same situation with IoT devices. The basic components of a computer are the memory and processing chip, the software, and the storage device (i.e., hard or flash drive), and IoT devices are very similar to, if not actually, computers. They have some type of data communications, they store programs, they process and store data, and they possess the capability, and thus the weakness, of being misused.
In my article, I identify the botnet components, list many IoT device vulnerabilities and talk about the types of attacks (and actual security incidents) that have taken place against various IoT devices. I review the information security and privacy concerns of home, office, and personal IoT devices, and my article identifies many of the common concerns. Recommendations for organizations, IoT device manufacturers, and the home and business are also included.
My intent is to make you aware of the possible weaknesses in these devices and how they can be a threat to you, your family, your well-being and possibly your place of work.
Read Larry G. Wlosinski's Journal article:
"The IoT as a Growing Threat to Organizations," ISACA Journal, volume 4, 2019.
The explosion of DevSecOps has caused a lot of excitement and worry within the cybersecurity community. It is no longer a question of whether an organization should implement DevSecOps, but rather when and how. While the scope and complexity of DevSecOps may initially seem daunting to security professionals, a few important points can be kept in mind to implement an effective DevSecOps program that enables an organization to increase the velocity of its software releases while remaining secure:
- Remember that tools are your best friend. The speed of DevSecOps makes manual testing/review simply too cumbersome to be effective. Find out which security tools best fit into your delivery pipeline, and work with the teams to integrate them so that your security controls are an integral part of the framework. At a bare minimum, you should have secure code reviews and automated security scanning for every software deployment.
- Automate the decision-making process. One of the key things I realized while implementing security controls in DevSecOps is that none of the automated security testing mentioned previously will make any difference if decisions are not made immediately based on the results. Jobs need to intelligently succeed or fail based on security success criteria that the security professionals and developers sit together and define. Certain findings will be showstoppers for which the developers need immediate feedback, while others can be fixed later, but this decision-making framework needs to be automated, with immediate results sent back to all relevant teams.
- There is no escape from coding. As much as I would like to say that every organization has enough budget to hire dedicated security professionals with deep coding experience, that is simply not realistic. DevSecOps often needs security professionals to roll up their sleeves and dig in to the code to find out why jobs are failing, application programming interface (API) calls are not being triggered, etc., and developers will get frustrated if security professionals are not able to provide answers for such problems. Investing in security training for developers and coding training for security professionals will reap huge dividends in the future and help break down silos, enabling a faster cultural shift to DevSecOps at the ground level.
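The second point above, automating the decision-making process, can be sketched as a simple security gate. The severity thresholds and finding format below are hypothetical, not from any particular tool; the point is that the pass/fail decision is made by code, immediately, against pre-agreed criteria:

```python
# Minimal sketch of an automated security gate. Severity thresholds and
# the finding format are hypothetical: the pipeline job fails immediately
# on showstopper findings and reports the rest back without blocking.

SHOWSTOPPERS = {"critical", "high"}

def security_gate(findings):
    """Return (passed, deferred) based on pre-agreed success criteria."""
    blocking = [f for f in findings if f["severity"] in SHOWSTOPPERS]
    deferred = [f for f in findings if f["severity"] not in SHOWSTOPPERS]
    return (len(blocking) == 0, deferred)

passed, deferred = security_gate([
    {"id": "CVE-2019-0001", "severity": "high"},
    {"id": "CVE-2019-0002", "severity": "low"},
])
# passed is False: the high-severity finding blocks the deployment,
# while the low-severity one is deferred for later remediation.
```

Security professionals and developers would define SHOWSTOPPERS together, and the deferred list would be routed back to the relevant teams rather than silently dropped.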
Read Taimur Ijlal's recent Journal article:
"Three Strategies for a Successful DevSecOps Implementation," ISACA Journal, volume 4, 2019.
Unpatched systems represent a very serious IT security threat with potentially severe consequences, as documented in a large number of high-profile breaches that exploited known unpatched vulnerabilities. Since these vulnerabilities are known, not just to attackers, but also to system administrators, and since patches exist, it is at first glance surprising that unpatched systems even exist. The reality, however, is that patching is not that simple: Because of interdependencies, it must be verified that the patch is compatible with everything else in the system, e.g., an operating system patch must be compatible with the applications and databases running on top of the operating system. Sometimes they are not, as manifested, for instance, in the recent Spectre and Meltdown vulnerabilities, where some application providers explicitly warned against patching. Verification means testing by other vendors, and this may not be a high priority for the application vendor, with an answer or full solution sometimes coming only with the next release. Today’s organizations typically employ a large number of systems and applications, and making sure all of them are patched promptly is not automatic.
In light of this situation, organizations need to bolster the first line of defense, i.e., do everything possible to ensure prompt patching and, in addition, prepare a second line of defense to deal with systems that cannot or will not be patched in a reasonable time frame. Such a strategy could entail:
- Involve high-level management, who need to be aware of the risk, and attempt to obtain contractual guarantees that patch issues will be addressed promptly, whether in the vendor’s own system or application or in other systems it depends on. Evaluate vendors in this respect.
- Establish a clear line of ultimate responsibility for patching. This involves appointing someone to monitor and assess the patching risk and empowering that person to carry out this task. It requires, among other things, an architectural map of the systems, covering their function, criticality, exposure (e.g., Internet-facing) and interconnections, as well as a monitoring tool that carries out regular scans with respect to patching.
- Contact the vendors regarding patch testing, compatibility and availability, and possibly carry out tests internally if necessary.
- Propose blacklisting irresponsive vendors.
- Propose and implement (in cooperation with relevant company units) alternative mitigating measures in case patching is not possible in a reasonable time frame. Such measures could involve deploying agents in the unpatched systems to block exploits (although this is unlikely to be accepted by the vendor), putting patched intermediate servers in the path to the Internet to inspect incoming traffic, and using web application firewalls (WAFs) or sandboxing-type solutions, always taking into account possible performance issues.
- Monitor and respond to rogue activities; this gains importance especially if one must live with unpatched systems.
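As a rough sketch of the monitoring and prioritization described above, unpatched systems can be ranked by a simple risk score combining criticality, exposure and time unpatched. The weights, cap and system names below are hypothetical, purely to illustrate the idea:

```python
# Illustrative sketch: ranking unpatched systems so the second line of
# defense focuses on the riskiest first. Weights and names are hypothetical.

def patch_risk_score(criticality, internet_facing, days_unpatched):
    """Higher score = patch (or mitigate) sooner.

    criticality: 1 (low) to 5 (business critical).
    days_unpatched is capped at 90 so ancient low-value systems
    do not drown out critical, exposed ones.
    """
    exposure = 2 if internet_facing else 1
    return criticality * exposure * min(days_unpatched, 90)

inventory = [
    ("web-portal", 5, True, 30),        # critical, Internet-facing
    ("internal-hr-app", 3, False, 120), # internal, long unpatched
]
ranked = sorted(inventory, key=lambda s: patch_risk_score(*s[1:]),
                reverse=True)
# web-portal scores 5*2*30 = 300; internal-hr-app scores 3*1*90 = 270.
```

Feeding the scan results from the monitoring tool into even a crude model like this gives the person responsible for patching a defensible ordering to present to management.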
Read Spiros Alexiou’s recent Journal article:
“Practical Patch Management and Mitigation,” ISACA Journal, volume 3, 2019.
The nature of risk management has changed over the past 2 decades. Previously isolated IT infrastructures are more connected with the outside world, and organizations face an ever-expanding threat landscape. Most organizations operate in a reactive mode, typically driven by an outside-in fear and avoidance approach where priorities are based on the latest known threat or new regulation. The challenge with this approach, in addition to it being reactionary and driven by outside forces, is that it promotes a keep-the-lights-on mentality, results in an inefficient use of resources and distracts from the priority of protecting an organization’s most critical data assets.
The motivation is primarily the fear of fines and reputational risk. For a security program to succeed and reduce information technology risk, it should instead focus on driving business value by effectively mitigating risk wherever it may live.
The Risk IT Framework developed by ISACA includes the following core principle: Make IT risk management a continuous process and a part of daily activities.
This tenet is prescient because today’s threat landscape never sleeps. Digital transformation, SensorNet, cloud and DevOps are creating dramatically expanding attack surfaces. Attackers are constantly looking for a way in, and employees are finding new ways to accidentally expose sensitive information. Annual penetration tests or security reviews do not cut it. Regulatory-focused security programs cannot keep up. So how can organizations move from a reactionary approach to a proactive, risk-centric program?
Know your business—Understand what information is most important to the organization. Understand what information assets drive the business and need more protection. One-size-fits-all security is not effective and can add substantial costs when it is not warranted. Talk to internal department leaders and get to know how security programs can add value to their lines of business.
Conduct a comprehensive risk assessment—Doing so will uncover where gaps in your existing programs are against appropriate regulations, standards and best practices. An assessment will provide a risk model to help identify the most likely attackers, assets they are most likely to go after and the overall impact to the organization in case of an incident.
Do not stop at a checklist—While a thorough assessment will provide a list of items to be addressed, move beyond a simple checklist. Each identified gap should be surrounded by control, planning and continuous risk monitoring.
Information security and risk management are not easy fields in which to succeed. These 3 basic steps can help you start transforming your organization’s approach to cybersecurity. The benefits of doing so include reducing security technology clutter, minimizing operational expenditures, and creating a program that is business aligned and more effective at reducing risk.
Read Brian Golumbeck’s recent Journal article:
“Moving Risk Management From Fear and Avoidance to Performance and Value,” ISACA Journal, volume 3, 2019.
Managing cyberrisk is critically important for organizations. Interconnectedness, digitization, the focus on utilizing data and providing enhanced client experiences expand the attack surface and expose an organization to increased cyberrisk. I cannot think of a worse experience for a board member than to be told (or to read in a newspaper) that the organization’s client database has been leaked online, that a significant amount of money was stolen or that the organization cannot operate because all the servers have been locked up with ransomware. No organization can be 100% secure, and bad events will happen. There are, however, practical steps that can be taken to reduce the risk of a cyberevent happening and, when it does happen, to recover the organization to the same state as before the event.
The difficult question is where to start managing cyberrisk, especially if the organization is not yet focused on cyber. I would advise against just jumping in and starting to implement cyberactions. The most important starting task, in my view, is to create a cyberresilience program with executive support. This task can be quite difficult, but without executive support, a vehicle for all the tasks that must be done and a report to keep the board informed of the cyberjourney, cyberrisk management will be dead in the water, and the organization will just be waiting to become a victim of a cyberattack. This is especially difficult in organizations that have not experienced a cyberevent. I will not be surprised if there are many organizations where the extent of cyberrisk management is a technical team buried in the IT department that focuses on hardware and security settings.
Although I mention it in my Journal article, I recommend doing a current-state cyberrisk assessment first. Procure the services of a respectable external consulting firm to do the assessment. Openness, transparency and honesty are the keywords for this step. The cyberpractitioner will know many of the things that are not in place in the organization and should provide that information to the assessment team. Once the attention and commitment of the board have been obtained with an external report, the next step is to create a cyberresilience program. In this step, focus on ranking the items discussed in my article over a period of 2 or 3 years. It will not be possible to do everything in year 1, as it will be too expensive, and the resources will not be immediately available to address everything in one go.
It is very important that the board understand cyberrisk; therefore, implementing board reporting and promoting executive awareness should be high priorities in the 3-year plan. It does not matter if the first report has lots of red items. The more informed the board is, and the better board members from the business lines understand the impact that a cyberevent can have on their organizations, the greater the chance of obtaining resources to implement the cyberplan. The board report is probably one of the most important tools a cyberpractitioner has and should be utilized effectively to manage cyberrisk and to describe the cyberjourney to the board. I am of the opinion that an organization should not attempt to describe the end goal for cyber, but rather to describe the journey and show that the right actions are being taken along the way to reduce cyberrisk. The next step is to adopt a cybermaturity framework against which to measure the organization internally. Armed with these tools, the other steps in my article can be mapped out and implemented, e.g., identifying the crown jewels, threat modelling, determining if controls are adequate to protect critical points along the kill chain, red team testing, etc., and each item that is implemented will improve the organization’s cybermaturity.
Read Jaco Cloete’s recent Journal article:
“Practical Cyberrisk Management,” ISACA Journal, volume 3, 2019.