Appendices

  • Appendix A: Methodology

  • One of the things readers value most about this report is the level of rigor and integrity we employ when collecting, analyzing, and presenting data.

    Knowing our readership cares about such things and consumes this information with a keen eye helps keep us honest. Detailing our methods is an important part of that honesty.

    First, we make mistakes. A column transposed here; a number not updated there. We’re likely to discover a few things to fix. When we do, we’ll list them on our corrections page: verizon.com/business/resources/reports/dbir/2022/corrections/.

    Second, we check our work. Just as the data behind the DBIR figures can be found in our GitHub repository32, we are, as with last year, also publishing our fact-check report there. It's highly technical, but for those interested, we've attempted to test every fact in the report.

    Third, science comes in two flavors: creative exploration and causal hypothesis testing. The DBIR is squarely in the former. While we may not be perfect, we believe we provide the best obtainable version of the truth (to a given level of confidence and under the influence of the biases acknowledged below). Proving causality, however, is best left to randomized control trials. The best we can do is correlation. And while correlation is not causation, the two are often related to some extent, and correlation is often useful.

    Non-committal Disclaimer

    We would like to reiterate that we make no claim that the findings of this report are representative of all data breaches in all organizations at all times. Even though we believe the combined records from all our contributors more closely reflect reality than any of them in isolation, it is still a sample. And although we believe many of the findings presented in this report to be appropriate for generalization (and our conviction in this grows as we gather more data and compare it to that of others), bias exists.

    The DBIR Process

    Our overall process remains intact and largely unchanged from previous years33. All incidents included in this report were reviewed and converted (if necessary) into the VERIS framework to create a common, anonymous aggregate data set. If you are unfamiliar with the VERIS framework, it is short for Vocabulary for Event Recording and Incident Sharing; it is free to use, and links to VERIS resources appear at the beginning of this report. The collection method and conversion techniques differed between contributors. In general, three basic methods (expounded below) were used to accomplish this:

    1. Direct recording of paid external forensic investigations and related intelligence operations conducted by Verizon using the VERIS Webapp
    2. Direct recording by partners using VERIS
    3. Converting partners’ existing schema into VERIS

    All contributors received instruction to omit any information that might identify organizations or individuals involved. 

    Some source spreadsheets are converted to our standard spreadsheet format through automated mapping to ensure consistent conversion. Reviewed spreadsheets and VERIS Webapp JavaScript Object Notation (JSON) are ingested by an automated workflow that converts the incidents and breaches within them into the VERIS JSON format as necessary, adds missing enumerations, and then validates each record against business logic and the VERIS schema. The automated workflow subsets the data and analyzes the results. Based on the results of this exploratory analysis, the validation logs from the workflow, and discussions with the partners providing the data, the data is cleaned and re-analyzed. This process runs nightly for roughly two months as data is collected and analyzed.
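    To make the validation step concrete, here is a minimal sketch using the jsonschema library against a local copy of the VERIS schema. The file name and the sample record are illustrative, and the actual DBIR pipeline does considerably more (enumeration filling, business-logic checks and nightly re-analysis).

```python
# Minimal sketch of the validation step described above (not the actual DBIR
# workflow). Assumes a local copy of the VERIS schema downloaded from the
# vz-risk/veris GitHub repository; file name and sample record are illustrative.
import json
from jsonschema import Draft4Validator  # pip install jsonschema

with open("verisc-merged.json") as f:   # hypothetical local copy of the VERIS schema
    schema = json.load(f)

incident = {
    "incident_id": "example-0001",
    "action": {"hacking": {"variety": ["Use of stolen creds"], "vector": ["Web application"]}},
    "actor": {"external": {"variety": ["Organized crime"], "motive": ["Financial"]}},
    "asset": {"assets": [{"variety": "S - Web application"}]},
    "attribute": {"confidentiality": {"data_disclosure": "Yes"}},
}

validator = Draft4Validator(schema)
for error in validator.iter_errors(incident):
    # Each error points at a field that is missing or fails the schema's constraints.
    print(list(error.path), error.message)
```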

    Incident Data

    Our data is non-exclusively multinomial, meaning a single feature, such as "Action," can have multiple values (e.g., "social," "malware" and "hacking"). This means that percentages do not necessarily add up to 100%. For example, if there are 5 botnet breaches, the sample size is 5. However, since each botnet used phishing, installed keyloggers, and used stolen credentials, there would be 5 social actions, 5 hacking actions, and 5 malware actions, adding up to 300%. This is normal, expected, and handled correctly in our analysis and tooling.

    Another important point is that, in our findings, "unknown" is equivalent to "unmeasured." If a record (or collection of records) contains elements marked as "unknown" (whether something as basic as the number of records involved in the incident or as complex as the specific capabilities a piece of malware contained), it means we cannot make statements about that element as it stands in the record; we cannot measure where we have too little information. Because they are "unmeasured," such elements are not counted in sample sizes. The enumeration "Other," however, is counted: it means the value was known but is not part of VERIS (or not one of the other bars if found in a bar chart). Finally, "Not Applicable" (normally "NA") may or may not be counted, depending on the claim being analyzed.
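    A toy example may make the arithmetic clearer. The sketch below uses made-up records to show how percentages over a multinomial field can exceed 100% and how records marked "Unknown" drop out of the sample size; it is not the actual DBIR tooling.

```python
# Illustrative only: made-up records showing why Action percentages can sum past
# 100% and why "Unknown" records are excluded from the sample size (n).
from collections import Counter

breaches = [
    {"actions": ["social", "malware", "hacking"]},   # one botnet breach, three actions
    {"actions": ["social", "malware", "hacking"]},
    {"actions": ["social", "malware", "hacking"]},
    {"actions": ["hacking"]},
    {"actions": ["Unknown"]},                        # unmeasured: not counted in n
]

measured = [b for b in breaches if b["actions"] != ["Unknown"]]
n = len(measured)                                    # sample size excludes unknowns
counts = Counter(a for b in measured for a in set(b["actions"]))

for action, count in sorted(counts.items()):
    print(f"{action}: {count}/{n} = {count / n:.0%}")
# hacking: 4/4 = 100%, malware: 3/4 = 75%, social: 3/4 = 75% -> totals exceed 100%
```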

    This year we have again made liberal use of confidence intervals to allow us to analyze smaller sample sizes. We have adopted a few rules to help minimize bias in reading such data (a brief sketch of the underlying interval math follows the list). Here we define a "small sample" as fewer than 30 samples:

    1. Sample sizes smaller than five are too small to analyze.
    2. We won't talk about counts or percentages for small samples. This goes for figures too, and is why some figures lack the dot for the median frequency.
    3. For small samples, we may talk about the value being in some range, or about values being greater or less than each other. These statements all follow the confidence interval approaches noted above.
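    For the curious, the sketch below shows the kind of binomial interval math that motivates these rules. A Wilson score interval is assumed here purely for illustration; it is not necessarily the exact method behind the DBIR figures.

```python
# Sketch of a binomial confidence interval for a small sample. The Wilson score
# interval is assumed for illustration, not as the DBIR's exact method.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion of successes out of n trials."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# With only 10 samples the interval is wide, which is why counts and percentages
# are not quoted for small samples and ranges are used instead.
print(wilson_interval(3, 10))   # roughly (0.11, 0.60)
```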

    Incident Eligibility

    For a potential entry to be eligible for the incident/breach corpus, a couple of requirements must be met. The entry must be a confirmed security incident, defined as a loss of Confidentiality, Integrity, or Availability. In addition to meeting the baseline definition of "security incident," the entry is assessed for quality. We create a subset of incidents (more on subsets later) that pass our quality filter. The details of what makes a "quality" incident are listed below (a simplified sketch of the filter follows the list):

    • The incident must have at least seven enumerations (e.g., threat actor variety, threat action category, variety of integrity loss) across 34 fields OR be a DDoS attack. Exceptions are given to confirmed data breaches with fewer than seven enumerations
    • The incident must have at least one known VERIS threat action category (hacking, malware, etc.)
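    The sketch below shows roughly how such a filter might be expressed in code. The field names and the way enumerations are counted are simplified stand-ins; the real check spans 34 specific VERIS fields.

```python
# Simplified sketch of the quality filter described above. The way enumerations
# are counted here is a stand-in; the real check spans 34 specific VERIS fields.
def passes_quality_filter(incident: dict) -> bool:
    known_enums = sum(
        1 for value in incident.get("enumerations", {}).values()
        if value not in ("Unknown", None)
    )
    is_ddos = "DoS" in incident.get("action_varieties", [])
    confirmed_breach = incident.get("data_disclosure") == "Yes"
    has_known_action = any(
        a in incident.get("action_categories", [])
        for a in ("hacking", "malware", "social", "misuse", "error", "physical", "environmental")
    )
    enough_detail = known_enums >= 7 or is_ddos or confirmed_breach
    return enough_detail and has_known_action
```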

    In addition to having the level of detail necessary to pass the quality filter, the incident must be within the timeframe of analysis (November 1, 2020, to October 31, 2021, for this report). The 2021 caseload is the primary analytical focus of the report, but the entire range of data is referenced throughout, notably in trending graphs. We also exclude incidents and breaches affecting individuals that cannot be tied to an organizational attribute loss. If your friend's laptop was hit with Emotet, it would not be included in this report.

    Lastly, for something to be eligible for inclusion into the DBIR, we have to know about it, which brings us to several potential biases we will discuss next.

    Acknowledgement and analysis of bias

    Many breaches go unreported (though our sample does contain many of those). Many more are as yet unknown by the victim (and thereby unknown to us). Therefore, until we (or someone) can conduct an exhaustive census of every breach that happens in the entire world each year (our study population), we must use sampling. Unfortunately, this process introduces bias.

    The first type of bias is random bias introduced by sampling. This year, our maximum margin of error is +/- 0.7% for incidents and +/- 1.4% for breaches, which is driven by our sample size; any subset with a smaller sample size will have a wider margin. We've expressed this confidence in the complementary cumulative density (slanted) bar charts, hypothetical outcome plot (spaghetti) line charts, quantile dot plots and pictograms.
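    Those maximum margins are consistent with a worst-case (p = 0.5) normal-approximation binomial interval. That formula is an assumption on our part, but it shows how sample size drives the width; the sketch below uses round, illustrative sample sizes rather than the actual corpus counts.

```python
# Sketch: how sample size drives the worst-case (p = 0.5) margin of error under a
# normal-approximation binomial interval. The sample sizes below are round,
# illustrative numbers, and the exact method behind the DBIR figures may differ.
from math import sqrt

def max_margin(n, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated from n records."""
    return z * sqrt(0.25 / n)

print(f"{max_margin(20_000):.1%}")  # 0.7% with a sample in the tens of thousands
print(f"{max_margin(5_000):.1%}")   # 1.4% with a few thousand records
print(f"{max_margin(100):.1%}")     # 9.8% for a small subset: much wider intervals
```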

    The second source of bias is sampling bias. We strive for “the best obtainable version of the truth” by collecting breaches from a wide variety of contributors. Still, it is clear that we conduct biased sampling. For instance, some breaches, such as those publicly disclosed, are more likely to enter our corpus, while others, such as classified breaches, are less likely.

    The four figures below are an attempt to visualize potential sampling bias. Each radial axis is a VERIS enumeration, and the stacked bar charts represent our data contributors. Ideally, we want the distribution of sources to be roughly equal on the stacked bar charts along all axes. Axes represented by only a single source are more likely to be biased. However, contributions are inherently thick-tailed, with a few contributors providing a lot of data and a lot of contributors providing a few records within a certain area. Still, we see that most axes have multiple large contributors, with smaller contributors adding appreciably to the total incidents along each axis.

  • You’ll notice a rather large contribution on many of the axes. While we’d generally be concerned about this, it represents contributions aggregating several other sources, not actual single contributions. It also occurs along most axes, limiting the bias introduced by that grouping of indirect contributors.

    The third source of bias is confirmation bias. Because we use our entire dataset for exploratory analysis, we cannot test specific hypotheses. Until we develop a collection method for data breaches beyond a sample of convenience this is probably the best that can be done.

    As stated above, we attempt to mitigate these biases by collecting data from diverse contributors. We follow a consistent multiple-review process, and when we hear hooves, we think horse, not zebra.34 We also try to review findings with subject matter experts in the specific areas ahead of release.

    Data Subsets

    We already mentioned the subset of incidents that passed our quality requirements, but as part of our analysis there are other instances where we define subsets of data. These subsets consist of legitimate incidents that would eclipse smaller trends if left in. These are removed and analyzed separately, though they may not be written about if no relevant findings were, well, found. This year we have two subsets of legitimate incidents that are not analyzed as part of the overall corpus:

    1. We separately analyzed a subset of web servers that were identified as secondary targets (such as taking over a website to spread malware)
    2. We separately analyzed botnet-related incidents

    Both subsets have also been separated out for the last five years. Finally, we create some subsets to help further our analysis. In particular, a single subset is used for all analysis within the DBIR unless otherwise stated. It includes only quality incidents as described above and excludes the two aforementioned subsets.

    Non-incident data

    Since the 2015 issue, the DBIR has included data that did not fit into our usual categories of “incident” or “breach.” Examples of non-incident data include malware, patching, phishing, DDoS, and other types of data. The sample sizes for non-incident data tend to be much larger than for the incident data, but they come from fewer sources. We make every effort to normalize the data (for example, weighting records by the number contributed from each organization so all organizations are represented equally). We also attempt to combine multiple partners with similar data to conduct the analysis wherever possible. Once analysis is complete, we try to discuss our findings with the relevant partner or partners to validate them against their knowledge of the data.
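    As a concrete (and entirely made-up) example of that weighting, the sketch below down-weights each record by the number of records its contributor supplied so that every contributor carries equal weight. The field names are illustrative.

```python
# Minimal sketch of the normalization described above: each record is weighted by
# 1 / (records from its contributor), so every contributing organization carries
# equal weight regardless of how much data it supplied. Field names are illustrative.
from collections import Counter

records = [
    {"contributor": "A", "blocked": True},
    {"contributor": "A", "blocked": True},
    {"contributor": "A", "blocked": False},
    {"contributor": "B", "blocked": False},
]

per_contributor = Counter(r["contributor"] for r in records)
weights = [1 / per_contributor[r["contributor"]] for r in records]

weighted_block_rate = sum(w for r, w in zip(records, weights) if r["blocked"]) / sum(weights)
print(f"{weighted_block_rate:.0%}")  # 33%: contributor B's single record counts as much as all of A's
```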

  • Appendix B: VERIS and Standards

  • While the DBIR is celebrating its 15th birthday, VERIS (the data standard underlying the DBIR) is creeping up to the ripe old age of 12. The standard was born out of necessity as a means of cataloging, in a repeatable manner, the key components of an incident. This enables analysis of what happened, who was impacted and how it occurred. Since then it has grown and matured into a standard that can be adopted by many different types of stakeholders. VERIS is tailored to be a standard that provides not only ease of communication, but also a connection point to other industry standards and best practices, such as the Center for Internet Security (CIS) and MITRE Adversarial Tactics, Techniques & Common Knowledge (ATT&CK). We realize that there isn’t going to be one universal framework (or language or, well, anything) to rule them all, but we certainly believe in the importance of peaceful co-existence between all the frameworks that have enabled the growth of this community. Below are some of the great projects and connection points that exist with VERIS, and it is our hope that the standard will bring more players to the cybersecurity table.

  • CIS Critical Security Controls

    The CIS Critical Security Controls (CIS CSC)35 are a community-built, prioritized list of cybersecurity best practices that help organizations of different maturity levels protect themselves against threats. The CIS CSC aligns well with VERIS, as the DBIR is built to help organizations catalogue and assess cybersecurity incidents. This mapping connects the dots between the bad things that are happening and the things that can help protect organizations from them. Since 2019, we’ve published a mapping document that helps organizations crosswalk the patterns most concerning to them with the Safeguards that can protect them from the attacks within those patterns. Within each industry section, organizations can find the Implementation Group 1 set of controls (the starting point for all organizations regardless of capability) based on the top patterns for that industry.

    MITRE ATT&CK

    MITRE ATT&CK has become one of the defining ways of capturing technical tactical information about what attackers do as part of their attack process. This rich dataset not only includes the specific techniques, but also the software, groups and mitigations associated with each technique, and, as of earlier this year, the associated VERIS components. To assist organizations with translating the technical tactical information into strategic insights, Verizon collaborated with partners at the Center for Threat Informed Defense (CTID) and created the official VERIS to ATT&CK mapping, available free of charge to anyone with an internet connection.36 The mapping data is represented in Structured Threat Information eXpression (STIX) format,37 includes tools and scripts to update the mapping, and also has a visualization layer that can be imported into ATT&CK Navigator. By providing this mapping, we hope that the various stakeholders of an organization can communicate and share their needs in a consistent fashion.
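    If you want to poke at the mapping yourself, the sketch below shows one generic way to read a STIX 2.x bundle with the python-stix2 library. The file name is hypothetical, and the specific object types you will find are described in the CTID repository rather than assumed here.

```python
# Minimal sketch of reading a STIX 2.x bundle such as the published VERIS/ATT&CK
# mapping. The file name is hypothetical; see the CTID repository for the actual
# bundle layout and object types.
import stix2  # pip install stix2

with open("veris-attack-mapping.json") as f:      # hypothetical local copy of the bundle
    bundle = stix2.parse(f.read(), allow_custom=True)

for obj in bundle.objects:
    # Relationship objects tie one mapped concept to another within the bundle.
    if obj.type == "relationship":
        print(obj.source_ref, obj.relationship_type, obj.target_ref)
```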

    Attack Flow

    One thing you may notice in the DBIR is that, outside of a few small areas such as the Timeline section, we do not discuss the path the attack takes. This is because non-atomic data (like paths and graphs of actions) is really hard to work with, for us and for the rest of the information security ecosystem. Whether it’s describing breaches, writing signatures, creating repeatable pen tests or control validations, or communicating to leadership, attack paths and graphs are difficult to create, share, and analyze.

    The DBIR team, with MITRE CTID and its participants, hopes to change that with the Attack Flow project. Attack Flow is a data schema for capturing both the causal path of an attack and the contextual data around it as it “flows”. Because breaches fan out and then come back together, go down a path and come back to a server, Attack Flow supports arbitrary graphs of actions and assets interacting. Because we all need to know different things about the attack, it uses a knowledge graph structure to capture the context of the flow. And because we all organize differently, it supports multiple namespaces. So, for example, you could use VERIS Actions, MITRE ATT&CK actions, organization-specific actions, or even a combination of all of the above as part of your attack path analysis.

    With Attack Flow, we now have a format we can use to share non-atomic data. Digital Forensics and Incident Response (DFIR) folks could document an incident as a flow. Detection vendors could use it to create a signature. Control validation tools can use it to simulate the incident. The Security Engineering team can use it to build attack surface graphs and plan mitigations. And all of them can use the structure to create communications to leadership, while sharing the same underlying data with each other in a standardized but flexible structure. If that sounds like something you could get behind, check out the MITRE project at https://github.com/center-for-threat-informed-defense/attack-flow and the DBIR team’s graph-based tools for working with it at https://github.com/vz-risk/flow
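    To make "arbitrary graphs of actions and assets" a bit more tangible, here is a toy example using networkx. This is not the Attack Flow schema itself (see the MITRE repository for the real format); it simply illustrates the kind of non-atomic structure being described, with VERIS and ATT&CK names mixed as namespaces.

```python
# Illustrative only: a toy directed graph of actions and assets, showing the kind
# of non-atomic "flow" structure described above. This is not the Attack Flow
# schema; see the MITRE CTID repository for the real format.
import networkx as nx  # pip install networkx

flow = nx.DiGraph()
flow.add_node("phish user", kind="action", namespace="VERIS:action.social.variety.Phishing")
flow.add_node("user laptop", kind="asset")
flow.add_node("use stolen creds", kind="action", namespace="ATT&CK:T1078")
flow.add_node("mail server", kind="asset")

flow.add_edge("phish user", "user laptop")
flow.add_edge("user laptop", "use stolen creds")
flow.add_edge("use stolen creds", "mail server")

# Walk the causal path from the initial action to the final asset.
print(nx.shortest_path(flow, "phish user", "mail server"))
```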

  • Appendix C: Changing behavior

  • In 2021 we reported that the human element impacted 85% of breaches, which decreased slightly to 82% this year.  Unfortunately, strong asset management and a stellar vulnerability scanner aren't going to solve this one.

    Instead, you're going to need to change the behavior of humans, and that is quite an undertaking.  Regardless of how you plan on doing it (be it giving them a reason to change, providing training or a combination of the two), you will need a way to tell if it worked, and that normally means running a test.  Here's a cheat-sheet of things that your internal department or vendor who is responsible for conducting the training should provide to you so you can determine if it is paying off:

    • A population of people you are interested in. (If the test was run only in a healthcare company or in a specific division, the population should be “Employees of healthcare companies” not “Anyone”)
    • A measurable outcome that can be proven or disproven. (Such as “More correct answers on a questionnaire about phishing delivered 1 day after training,” “Fewer people clicked the phishing email,” etc.)
    • An intervention to test to see if it changes the outcome. (“Watch a 15-minute video on not clicking phishing.”)
    • A control that provides a baseline for the outcome. (“Received no training,” “Read a paragraph of text about phishing,” “Read a comic book and took a nap,” etc.)
    • Random assignment. (“200 employees were picked to participate in the study. 100 were randomly assigned the control and 100 were randomly assigned the intervention”)
    • The conditions of the test should also be shared. (“Participants were sent the control or intervention via the company training tool as annual mandatory training.”)

    This test may be something run specifically for you or a test the trainer already ran. As long as the population, outcome, etc. are close to yours, it doesn’t matter. 

  • The manner in which the results are reported is just as important as how the testing was conducted. Here are a few things you should expect in the results:

    • A result with a confidence interval. (If 10 folks in the control clicked the phishing email and only 1 in the intervention, 10-1 = 9 people we thought would click, but didn’t. That’s a 9/10 or 90% effectiveness.) But that one number doesn’t tell the whole story. What if some of those were flukes? The result should come with a range (such as 70% to 100% effective at 95% confidence) similar to the DBIR ranges; a sketch of this computation follows the list. (Btw, if the range includes 0%, there’s a chance the training didn’t actually do anything.) If the outcome question was yes/no, then just a confidence level will do. (“People changed due to the intervention with 98% confidence.”)
    • Since results can vary over time, you should know when the result was measured. “The result was 30 days after the intervention.”
    • What if some folks just refused to take the training? That’s called dropout and is important to the results. The results should show who dropped out (preferably by full name so they can be shamed in front of their peers – okay, not really). (“Twenty percent of technically savvy employees didn’t take the intervention training, while only 2% didn’t take the control training.”) Dropout can occur for any of the other characteristics recorded (industry, world region, department, age, etc.) and if major differences are found, the results should be broken out by those characteristics as well. (“People changed 98% in tech savvy folks, but only 70% in non-tech savvy.”)
    • There should also be qualitative questions. Sometimes you don’t know what you don't know.38 In that case, there should be an open-ended question about the training in addition to the more objective outcomes. It also gives a chance to ask questions like “Do you have formal education or a job in a computer-related field?” And you can see the importance of this in the dropout bullet above. Asking “Is there anything else you’d like us to know?” might reveal that while the training was effective, many folks found it highly offensive.
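    As promised above, here is a sketch of the effectiveness calculation from the first bullet. A normal-approximation interval on the difference in click rates is assumed here; your vendor may reasonably report something else.

```python
# Sketch of the example in the first bullet: 100 people per group, 10 control
# clicks vs. 1 intervention click. A normal-approximation interval on the
# difference in click rates is assumed; the vendor may report something else.
from math import sqrt

def click_rate_difference(ctrl_clicks, ctrl_n, test_clicks, test_n, z=1.96):
    p1, p2 = ctrl_clicks / ctrl_n, test_clicks / test_n
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / ctrl_n + p2 * (1 - p2) / test_n)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = click_rate_difference(10, 100, 1, 100)
print(f"absolute reduction in click rate: {diff:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
print(f"relative effectiveness: {1 - (1/100) / (10/100):.0%}")  # the 90% in the example
# If the lower bound of the interval includes 0%, the training may not have done anything.
```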

    Unfortunately, there’s no way to guarantee something works. But, if you’re getting the information above (Population, Outcome, Intervention, Control, Random, Conditions) and (Results, When, Dropout, Qualitative) you can be reasonably confident you’re getting something for your effort. So, remember those easy acronyms: POICRC and RWDQ!

  • Appendix D: U.S. Secret Service

  • Evolution of Investigative Methodology to Thwart an Everchanging Cybercriminal Landscape

    The ways in which we live, work, and interact with each other have changed dramatically over the last few years. The increased use of Internet platforms during the COVID-19 pandemic clearly demonstrates our growing economic dependence on information technology, and with it an increased risk of cybercrime. Transnational cybercriminals continue to expand their capabilities and their ability to cause harm, regardless of whether they are financially or politically motivated.

    In terms of criminal activity, 2021 saw growth in crimes involving cryptocurrencies. This includes digital extortion schemes (including ransomware), theft of credentials or private keys that control substantial value in digital assets, manipulation of decentralized finance (DeFi) systems, and new money laundering methods that enable a wide variety of illicit activity. Transnational criminals are increasingly using cryptocurrency and other digital assets rather than traditional physical assets or the intermediated financial system. Cryptocurrencies even found their way into Super Bowl advertisements. What was once a niche market is now a growing part of modern life: investing, trading, and illicit activity as well.

    Since its creation in 1865, the U.S. Secret Service has continuously evolved its investigative strategies and methods to protect our nation’s financial system. We are no longer chasing counterfeiters on horseback but are now focused on preventing cyber fraud by identifying and arresting cybercriminals worldwide. In 2010, when the Secret Service first joined in developing the DBIR, the foremost risk we were seeing was the theft of payment card and PII data for use in fraud. As this year’s report shows, that risk is still present, but we are also seeing the development of new schemes by those who illicitly exploit the Internet.

    To keep pace with evolving criminal activity, the U.S. Secret Service focuses on partnering to enable businesses and law enforcement to take effective actions to mitigate risk. The DBIR is a key part of this, providing recommendations derived from analysis of aggregated incident reports. We also aid our partners and prevent cyber incidents through the work of our Cyber Fraud Task Forces, and through the over 3,000 state, local, tribal and territorial (SLTT) law enforcement personnel we trained at the National Computer Forensics Institute (NCFI) in FY 2021. We coordinate these activities globally through a dedicated group of investigators in our Global Investigative Operations Center (GIOC) focused on achieving the most effective outcomes, from recovering and returning stolen assets to victims to apprehending those responsible.

    This past year clearly demonstrated the increasing impact ransomware is having on businesses, critical infrastructure, and national security. The most prolific ransomware networks are Russian-speaking, though this crime is not limited to one country or region. According to one industry estimate, 74% of ransomware payments went to Russia-affiliated actors. We have also seen the use of destructive malware, which is functionally similar to ransomware but lacks a means for payment. This dynamic, coupled with the limited cooperation of some states in countering ransomware, illustrates a growing risk that blurs the distinction between politically and financially motivated cybercrimes. This risk reinforces why partnership is essential in improving cybersecurity, both by improving the resilience of computer systems and by apprehending the threat actors.

    Despite transnational cybercrime being a daunting challenge, the U.S. Secret Service relentlessly pursues these cases. In 2021, the Secret Service led or participated in numerous multinational operations to counter cybercriminal networks. For example, we conducted a multinational operation with the Dutch Police and Europol to arrest multiple individuals responsible for ransomware attacks affecting over 1,800 victims in 71 countries. In total, the Secret Service responded to over 700 network intrusions and prevented over $2.3 billion in cyber financial losses last fiscal year.

    Identity theft and fraud continue to be core activities of transnational cybercriminals; they provide a means to convert stolen personally identifiable information into profit. The COVID-19 pandemic created new opportunities for this sort of fraud, as criminals defrauded relief programs. In response, the U.S. Secret Service named a National Pandemic Fraud Recovery Coordinator to focus on partnering with financial institutions to prevent and recover fraudulent payments. These efforts resulted in the U.S. Secret Service recovering more than $1.2 billion, the return of more than $2.3 billion of fraudulently obtained funds, and over 100 arrests.

    As the world becomes more digitized, we are connected not only to each other through technology but also to a wide array of devices, such as those that make up the Internet of Things (IoT). Other emerging technologies that may soon be the targets of cybercriminals include quantum cryptography, 5G wireless technology and Artificial Intelligence (AI). These technologies have the potential to improve lives and open new lines of communication. Conversely, cybercriminals will seek ways to use these technologies for malicious gain. The U.S. Secret Service, while focused on thwarting criminal activity today, has already started to train and prepare for the cybercrimes of the future.

    Preventing cybercrime requires a multi-pronged strategy: increasing cybersecurity resilience, and pursuing criminals and seizing illicit gains to deter and prevent future crimes. Both of these efforts are strengthened by the analysis of aggregated incident reports and the evidence-based recommendations the DBIR provides. In 2022, the U.S. Secret Service looks forward to further strengthening our partnerships to stay ahead of our changing use of technology and the efforts of criminals to exploit it, and to ensure there is no safe place for cybercriminals to hide.

David M. Smith
Assistant Director
U.S. Secret Service

Jason D. Kane
Special Agent in Charge
Criminal Investigative Division
U.S. Secret Service

  • Appendix E: Ransomware Pays

  • In past reports, we have talked at length about the cost of ransomware and other breaches to victims. However, we have never examined what the economics look like for the threat actor. This alternate point of view might provide some useful insights.

    To that end, we have combined the value chain targeting and distribution data, phishing test success rate data, criminal forum data, and ransomware payment data to estimate what the business looks like from the criminal’s side.

    First, let’s examine the cost of access. Figure 121 illustrates the cost of hiring (criminal) professional services to do the actor’s dirty work. These (and larger criminal organizations with internal staff for access) are likely going for riskier, bigger payout attacks.

    The small-time criminal is less of a techie and more of a manager. They are trying to minimize costs, so they will not invest in professional services. Instead, they buy access products outright in the form of credentials, emails for phishing, vulnerabilities, or botnet access. Figure 122 gives an idea of the costs. Instead of $100,000, the majority of access doesn’t even cost a dollar. This is because most access is email, which is incredibly cheap, even when the median click rate is only 2.9%. While purchases of direct access, in the form of access to a bot, login credentials, or knowledge of a vulnerability, are also included, it’s email that steers the ship.

  • Contrast Figure 122 with the profits in Figure 123. Sixty percent of ransomware incidents had no profit and aren’t shown in the figure. A large portion had a profit near one dollar. However, the median is just over $100. Figure 124 shows what those same profits look like over time. After 300 simulated ransoms the actor has over $600,000 in income.39

    To see if this was an anomaly, we simulated 500 ransomware actors and 1.4% of them showed a loss.40 However, the median profit after 300 incidents was $178,465, with the top simulated earner making $3,572,211.
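    For readers who want to play along at home, the sketch below runs a toy Monte Carlo in the same spirit. The cost and payout distributions are made up for illustration; they are not the contributor data behind Figures 121 through 124, so the numbers it prints will not match the ones above.

```python
# Toy Monte Carlo in the spirit of the simulation described above. The cost and
# payout distributions are made up; they are not the data behind Figures 121-124.
import random

random.seed(0)

def simulate_actor(n_incidents=300):
    profit = 0.0
    for _ in range(n_incidents):
        access_cost = random.choice([0.5, 1, 10, 100])       # most access is cheap email
        paid = random.random() < 0.4                         # roughly 40% of ransoms pay out
        payout = random.lognormvariate(5, 2) if paid else 0  # long-tailed payouts
        profit += payout - access_cost
    return profit

results = sorted(simulate_actor() for _ in range(500))
losses = sum(1 for r in results if r < 0)
print(f"actors at a loss: {losses / 500:.1%}")
print(f"median profit: ${results[250]:,.0f}")
print(f"top earner: ${results[-1]:,.0f}")
```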

    The takeaway is that ransomware is more of a lottery41 than a business. You gamble on access, win the lottery 40% of the time, and get a payout from a few bucks to thousands of dollars. But for something more actionable, focus on the access. If an actor has to pay for services to break in rather than just an access product, you’ve made yourself much less of a target. Use antivirus to remove bots; implement patching, filtering, and asset management to prevent exposed vulnerabilities; and standardize two-factor authentication and password managers to minimize credential exposure. Lastly, with email being the largest vector, you can’t ignore the human element. Start with email and web filtering, followed by training. (See the Changing Behavior appendix for a recommendation on how to tell if your training is working.)

  • Appendix F: Contributing Organizations


  • A
    Akamai Technologies
    Ankura
    Anomali
    Apura Cybersecurity Intelligence
    AttackIQ
    Atos


  • B
    Bad Packets
    Bit-x-bit
    BitSight
    BlackBerry Cylance


  • C
    Center for Internet Security
    CERT Division of Carnegie Mellon University’s Software Engineering Institute
    CERT European Union
    Check Point Software Technologies Ltd.
    Chubb
    Coalition
    Computer Incident Response Center Luxembourg (CIRCL)
    Coveware
    CrowdStrike
    Cybersixgill
    Cybercrime Support Network
    Cybersecurity and Infrastructure Security Agency (CISA)
    CyberSecurity Malaysia, an agency under the Ministry of Communications and Multimedia (KKMM)
    Cybir (formerly DFDR Forensics)


  • D
    Defense Counterintelligence and Security Agency (DCSA)
    Dell
    Digital Shadows
    DomainTools (formerly Farsight Security)
    Dragos, Inc.


  • E
    EASE (Energy Analytic Security Exchange)
    Edgescan
    Elevate Security
    Emergence Insurance
    EUROCONTROL


  • F
    Financial Services ISAC (FS-ISAC)
    Federal Bureau of Investigation - Internet Crime Complaint Center (FBI IC3)
    Fortinet


  • G
    Global Resilience Federation
    GreyNoise


  • H
    HackedEDU
    Hasso-Plattner Institut


  • I
    Irish Reporting and Information Security Service (IRISS-CERT)


  • J
    Jamf
    JPCERT/CC


  • K
    K-12 SIX (K-12 Security Information Exchange)
    Kaspersky
    KnowBe4
    KordaMentha


  • L
    Lares Consulting
    Legal Services - ISAO
    LMG Security
    Lookout


  • M
    Malicious Streams
    Maritime Transportation System ISAC (MTS-ISAC)
    Micro Focus
    mnemonic


  • N
    NetDiligence®
    NETSCOUT
    NINJIO Cybersecurity Awareness Training


  • P
    Palo Alto Networks
    ParaFlare Pty Ltd
    Proofpoint
    PSafe


  • Q
    Qualys


  • R
    Ransomwhere
    Recorded Future


  • S
    S21sec
    SecurityTrails
    Shadowserver Foundation
    Shodan
    SISAP - Sistemas Aplicativos
    Swisscom


  • U
    U.S. Secret Service


  • V
    VERIS Community Database
    Verizon Cyber Risk Programs
    Verizon DDoS Shield
    Verizon Mobile Security Dashboard
    Verizon Network Operations and Engineering
    Verizon Professional Services
    Verizon Sheriff Team
    Verizon Threat Intelligence Platform
    Verizon Threat Research Advisory Center (VTRAC)
    Vestige Digital Investigations


  • W
    WatchGuard Technologies


  • Z
    Zscaler
