Beware of insider threats to security involving AI, according to the HIMSS survey report.


Nearly one in three healthcare organizations formally allow their employees to use AI. A third permit AI use as long as management approves the models. Just 16% ban AI completely.

The findings come from a recent survey conducted by the Healthcare Information and Management Systems Society, or HIMSS. The researchers set out to understand the broader landscape of cybersecurity across healthcare, not just AI. Even so, AI ends up occupying a sizable portion of the report.

The survey drew responses from 273 healthcare cybersecurity professionals working for providers (50%), vendors (23%), consulting firms (13%), government (8%), and other organizations (11%).

Respondents ranged from C-suite leaders (50%) to non-executive management (37%) and non-management staff (13%).

What they all shared was some degree of responsibility for day-to-day cybersecurity activities.

The AI portion of the report includes several highlights, outlined below.

AI use cases.

Respondents said they use AI for technical tasks like support and data analytics (35%), and for cybersecurity and administrative tasks (34% each). HIMSS notes:

As AI becomes more widespread, more AI use cases are anticipated in the future.

AI approval processes.

Nearly half of respondents (47%) said their organizations have approval processes for AI, while 42% said they do not, and 11% were uncertain whether such processes exist in their organizations. The authors comment:

An approval process acts as a strategic guardrail by screening AI technologies before adoption, reducing the likelihood of unauthorized or improper use. Monitoring AI use serves as a tactical guardrail, providing continued oversight of AI activities to identify and address potential misuse, compliance issues, or security risks.

Active monitoring of AI.

Only 31% of respondents said their organizations actively monitor AI usage across devices and systems, whereas 52% said they don't and 17% said they don't know. HIMSS points out:

Lack of monitoring “presents risks,” including data breaches. Robust monitoring mechanisms are required to ensure the safe and responsible use of AI systems.

Acceptable use policies.

48% of respondents said their healthcare organizations do not have a written acceptable use policy (AUP) for AI, and 10% did not know whether one exists. HIMSS writes:

An acceptable use policy establishes clear guidelines for the safe and responsible use of AI. Such a policy can be either standalone or integrated into a broader organizational policy, depending on the organization's adoption of AI.

Potential cybersecurity issues involving AI.

Data privacy was identified as a top concern by 75% of respondents, followed by data breaches (53%) and bias in AI systems (53%). 47% of respondents were concerned about intellectual property theft and lack of transparency, while 41% cited risks to patient safety. HIMSS writes:

These findings “underscore the need for strong safeguards, governance frameworks, and proactive measures to address the risks.”

AI and insider risk.

A small proportion of respondents reported either negligent (3%) or malicious (3%) insider threat activity involving AI. According to HIMSS:

Although these figures may seem small, it is likely that some organizations have not yet implemented monitoring for AI-driven insider threats, leaving risks undetected.

The authors add, reinforcing the implied call to arms against insider threats:

The rising reliance on AI tools and systems opens up new opportunities for both negligent and malicious insider activity, which may raise risks to sensitive data and operational integrity.

HIMSS doesn’t specify which geographic regions the survey covered, but it has offices in North America, Europe, the United Kingdom, the Middle East, and Asia-Pacific.

Download the full report <a href="https://www.himss.org/resources/himss-healthcare-cybersecurity-survey/?_ga=2.116955142.1299106647.1740495350-174807845.1739547770">here</a>.
