5 things to know: Driving innovation with AI and hybrid cloud in the year ahead

Secure and Compliant AI for Governments

The threat of AI attacks is not yet widely known, and as such these critical assets are treated with lower security standards than “hard” assets, such as passwords, which are stored with encryption and strict access controls. Critical applications that employ AI must adopt a set of best practices to harden the security of these assets, because many AI attacks are aided by gaining access to assets such as datasets or model details. In many scenarios, adversaries obtain that access through traditional cyberattacks that compromise the confidentiality and integrity of systems, a subject well studied within the cybersecurity CIA triad. Traditional confidentiality attacks will enable adversaries to obtain the assets needed to engineer input attacks, while traditional integrity attacks will enable adversaries to make the changes to a dataset or model needed to execute a poisoning attack.

  • After deployment, companies and regulators should continually evaluate the harm caused by the systems, updating safeguards in light of new evidence and scientific developments.
  • To ensure that safety-critical AI systems are built on solid foundations, reducing the chance of accidents, widely used foundation models should be designed with a particular focus on transparency and ensuring they behave predictably.
  • With such safeguards in place, agencies can train robust traffic models with advanced monitoring capabilities.
  • The use of AI by public entities, including the judiciary, should be anchored in the fundamental properties of trustworthy AI used by IBM.

Today, Salesforce unveiled new capabilities and compliance certifications across Customer 360 for Public Sector. Government organizations can now modernize their services with pre-built, government-specific capabilities that meet stringent compliance requirements on a single automated and intelligent platform.

About Ask Sage

Ask Sage is an AI-driven solution provider specializing in assisting government and commercial teams with data analysis, insights, and factual answers.

Why is viAct a pioneer in AI for Government & Public Sector?

Microsoft has been working hard to win the US government’s trust as a cloud provider – but it has made missteps, too. Microsoft reports that it encrypts all Azure traffic using the IEEE 802.1AE – or MACsec – network security standard, and that all traffic stays within its global backbone of more than 250,000 km of fiber-optic and undersea cable systems. Protect AI was named one of the Top 100 most promising artificial intelligence startups of 2023 by CB Insights.

Or, as you go deeper into AI tools such as Microsoft Copilot, you can establish sound data security practices by enabling access controls and gaining visibility into what your users can reach, ensuring the AI does only what you intend. While generative AI such as ChatGPT, or diffusion models for imaging such as DALL-E, opens up opportunities for efficiency and productivity, it can also be used to spread misinformation. This raises complex human and social factors that require policy and possibly regulatory solutions. Interested in building enterprise AI applications that facilitate public sector operations? Similarly, the U.S. Department of Homeland Security uses EMMA, a virtual assistant catering to immigration services.
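The access-control principle above can be sketched as a pre-retrieval filter: before any document reaches an AI assistant, it is checked against the requesting user’s permissions, so the assistant can never surface content the user could not open directly. A minimal illustration; the `User`, `Document`, and group-based ACL structures are hypothetical, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups permitted to read this document

@dataclass(frozen=True)
class User:
    name: str
    groups: frozenset

def retrievable(user: User, docs: list) -> list:
    """Return only the documents the user is entitled to read.

    Applying this filter *before* retrieval means the AI assistant
    only ever sees content the user could access on their own.
    """
    return [d for d in docs if user.groups & d.allowed_groups]

docs = [
    Document("d1", "Public holiday schedule", frozenset({"all-staff"})),
    Document("d2", "Payroll export", frozenset({"hr"})),
]
analyst = User("analyst", frozenset({"all-staff"}))
visible = retrievable(analyst, docs)  # the payroll export is filtered out
```

Real deployments resolve permissions against the source system of record (e.g. the file share’s own ACLs) rather than duplicating them, but the ordering shown here – filter first, then feed the model – is the key point.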

Managed IT Services Firm

Critical AI systems must restrict how and when the data used to build them is shared in order to make AI attacks more difficult to execute. Review and update data collection and sharing practices to protect against data being weaponized against AI systems. This includes formal validation of data collection practices and restricting data sharing. Conduct “AI Suitability Tests” that assess the risks of current and future applications of AI. These tests should result in a decision as to the acceptable level of AI use within a given application.
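One way to operationalize an “AI Suitability Test” is a scored checklist that maps an application’s risk factors to an acceptable level of AI use. The factor names, weights, and thresholds below are purely illustrative assumptions, not drawn from any published standard:

```python
def ai_suitability(factors: dict) -> str:
    """Toy AI Suitability Test: sum weighted risk factors and map the
    total score to an acceptable level of AI use. All weights and
    cutoffs are placeholders for illustration only.
    """
    weights = {
        "safety_critical": 3,       # failure can cause physical harm
        "public_facing": 2,         # adversaries can probe the system freely
        "shared_training_data": 2,  # dataset is distributed outside the org
        "opaque_model": 1,          # decisions cannot be audited
    }
    score = sum(weights[k] for k, present in factors.items() if present)
    if score >= 6:
        return "human decision only"
    if score >= 3:
        return "AI with human review"
    return "AI permitted"

level = ai_suitability({
    "safety_critical": True,
    "public_facing": True,
    "shared_training_data": False,
    "opaque_model": True,
})
```

The value of such a test lies less in the arithmetic than in forcing an explicit, recorded decision about how much AI autonomy a given application should be granted.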

AI solves complex problems by analyzing data, recognizing patterns, and adapting based on the results. Domino’s hardened cloud-native Kubernetes-based solution is designed to deliver agility while minimizing your threat surface. While steps taken by governments demonstrate their commitment to ensuring robust safeguards, further efforts should be made continuously. The evolving nature of technology requires ongoing adaptation of policies, resilience building against emerging risks, and regular updates to existing frameworks.

The Director shall additionally consult with agencies, as appropriate, to identify further opportunities for agencies to allocate resources for those purposes. The actions by the Director shall use appropriate fellowship programs and awards for these purposes.

(ii)  a public report with relevant data on applications, petitions, approvals, and other key indicators of how experts in AI and other critical and emerging technologies have utilized the immigration system through the end of Fiscal Year 2023.

(i)   The Secretary of Defense shall carry out the actions described in subsections 4.3(b)(ii) and (iii) of this section for national security systems, and the Secretary of Homeland Security shall carry out these actions for non-national security systems. Each shall do so in consultation with the heads of other relevant agencies as the Secretary of Defense and the Secretary of Homeland Security may deem appropriate. Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary.

Why is artificial intelligence important in government?

By harnessing the power of AI, government agencies can gain valuable insights from vast amounts of data, helping them make informed and evidence-based decisions. AI-driven data analysis allows government officials to analyze complex data sets quickly and efficiently.

In the age of data-driven decision making, Artificial Intelligence (AI) is becoming increasingly integral to businesses of all sizes and industries, as well as government agencies. However, this burgeoning dependence on AI also raises concerns about data privacy, ethics and compliance. This is where Collibra AI Governance steps in, playing a crucial role in ensuring responsible, ethical and effective AI implementations. These standards should likely change substantially over time as we learn more about the risks from the most capable AI systems and the means of mitigating those risks. For example, AI-generated images and videos should be watermarked to allow users to know when they are engaging with AI-generated content. Companies could also commit to making significant investments—say, spending at least 20 percent of their budgets—to improve the safety of their systems.
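The watermarking idea mentioned above can be as simple as embedding a known bit pattern in generated media so that provenance can be checked later. The sketch below hides a short tag in the least-significant bits of raw pixel values; production watermarks for generative content are far more robust to cropping and compression, so treat this purely as an illustration of the concept:

```python
def embed(pixels, tag: bytes):
    """Write the tag's bits into the least-significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, n: int) -> bytes:
    """Read n bytes back out of the least-significant bits."""
    bits = [p & 1 for p in pixels[: n * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k : k + 8]))
        for k in range(0, n * 8, 8)
    )

image = [128] * 64            # stand-in for raw 8-bit pixel data
marked = embed(image, b"AI")  # tag the "generated" image invisibly
```

Because only the lowest bit of each value changes, the marked image is visually indistinguishable from the original, yet any party who knows the scheme can recover the tag.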

In conjunction with apps that could automate the crafting of AI attacks, the availability of cheap computing hardware removes the last barrier to the successful and easy execution of these attacks. An Uber self-driving car struck and killed a pedestrian in Tempe, Arizona, when the on-board AI system failed to detect a human in the road.[52] While it is unclear whether the particular pattern of this pedestrian is what caused the failure, the failure manifested itself in exactly the manner an AI attack on the system would. This real-world example is a terrifying harbinger of the success awaiting adversaries who deliberately search for attack patterns. The NIJ’s funded research into classifying firearm class and caliber from audio signals also presents a target: new classes of hardware accessories, such as “smart silencers,” may be developed that execute AI attacks to deceive these systems, for example by making them think a gunshot came from a different gun. In an environment with AI attacks, content filters cannot be trusted to perform their job.

Input attacks trigger an AI system to malfunction by altering the input that is fed into the system. This is done by adding an “attack pattern” to the input, such as placing tape on a stop sign at an intersection or adding small changes to a digital photo being uploaded to a social network. Once attackers have chosen an attack form that suits their needs, they must craft the attack itself. The difficulty of crafting an attack depends on the information available to the attacker, but attacks remain practical (although potentially more challenging to craft) even under very restrictive conditions. And unlike visible tampering, when the pattern is subtle there is no way for humans to observe that a target has been manipulated.
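The “attack pattern” idea can be demonstrated on a toy linear classifier: nudging each input feature slightly in the direction indicated by the model’s weights (the core idea behind gradient-based methods such as FGSM) flips the prediction while leaving the input almost unchanged. All numbers here are synthetic, and a two-class linear model stands in for a real perception system:

```python
def sign(v):
    """Return -1, 0, or +1 according to the sign of v."""
    return (v > 0) - (v < 0)

def predict(w, x, b):
    """Linear classifier: returns +1 or -1 for input x."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def attack(w, x, eps):
    """Add a small perturbation aligned with the weights -- the
    'attack pattern' -- to push the score across the decision boundary.
    Each feature moves by at most eps, so the change stays subtle."""
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.5, -0.8, 0.3], 0.0
x = [0.2, 0.3, 0.1]            # score = 0.1 - 0.24 + 0.03 < 0 -> class -1
x_adv = attack(w, x, eps=0.2)  # near-identical input, opposite prediction
```

For a real neural network the perturbation direction comes from the gradient of the loss rather than the raw weights, but the mechanism — a small, targeted additive pattern — is the same one described above.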


What is security AI?

AI security is evolving to safeguard the AI lifecycle, insights, and data. Organizations can protect their AI systems from a variety of risks and vulnerabilities by compartmentalizing AI processes, adopting a zero-trust architecture, and using AI technologies for security advancements.

How can AI improve the economy?

AI has redefined aspects of economics and finance, enabling complete information, reduced margins of error and better market outcome predictions. In economics, price is often set based on aggregate demand and supply. However, AI systems can enable specific individual prices based on different price elasticities.
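The textbook link between elasticity and price is the Lerner markup rule, p = c · e / (e − 1), where c is marginal cost and e is the (absolute) price elasticity of demand a given customer segment faces: less elastic buyers pay higher prices. A minimal sketch with invented numbers, assuming a profit-maximizing seller and elastic demand:

```python
def optimal_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximizing price under the Lerner markup rule:
    p = c * e / (e - 1). Requires elasticity > 1, since demand
    must be elastic at the optimum for a finite price to exist."""
    if elasticity <= 1:
        raise ValueError("elasticity must exceed 1")
    return marginal_cost * elasticity / (elasticity - 1)

cost = 10.0
price_sensitive = optimal_price(cost, elasticity=5.0)     # small markup
price_insensitive = optimal_price(cost, elasticity=1.25)  # large markup
```

An AI system that estimates e per individual rather than per market is what turns this classical formula into the individualized pricing described above.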

Who is responsible for AI governance?

The role of everyone in AI governance

Everyone in an organisation plays a part in AI governance. From the executive team defining the AI strategy, to the developers building the AI models, to the users interacting with the AI systems, everyone has a responsibility to ensure the ethical and responsible use of AI.
