


Why Companies May Resist AI Adoption


Artificial Intelligence (AI) holds immense potential to transform industries and drive innovation. However, despite the numerous benefits AI can offer,
many companies exhibit resistance to adopting AI technologies. Recently, Samsung told employees not to use AI tools like ChatGPT, citing security concerns, and Apple restricted employee use of ChatGPT. This blog explores the multifaceted reasons behind this reluctance, covering the wide range of adoption challenges and considerations companies face when implementing AI.

Security in AI: Addressing Particular AI Security Risks

Probably the main reason companies resist using AI is security: more specifically, data confidentiality, integrity, availability, and non-repudiation. Security is most often compromised by adversarial attacks, in which attackers exploit vulnerabilities in AI algorithms to manipulate systems into producing incorrect or malicious outputs. In some cases, attackers poison the training data used to build AI models, injecting malicious or biased data to compromise the accuracy and fairness of the models.

AI systems often process sensitive data, making them targets for security and privacy breaches. Ensuring data confidentiality and privacy is crucial, so organizations should implement robust security measures, including data encryption, supply chain security, access controls, and compliance with data protection regulations.

Organizations should keep up to date with the latest security best practices and standards specific to AI systems, and implement strong data encryption techniques to protect sensitive information. Established guidelines are a good starting point, such as the OWASP AI Security and Privacy Guide, or ISO/IEC AWI 27090 (under development), which provides guidance on security aspects specific to AI systems.
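As a concrete illustration of the data-leakage concern behind the Samsung and Apple restrictions, here is a minimal Python sketch of scrubbing personal data from text before it is sent to an external AI tool. The `redact_pii` helper and its regexes are illustrative assumptions, not a production-grade solution; real deployments would use vetted PII-detection tooling.

```python
import hashlib
import re

def redact_pii(text: str) -> str:
    """Replace emails and phone-like numbers with stable pseudonyms
    before the text leaves the organization (e.g. in an AI prompt)."""
    def pseudonym(match: re.Match) -> str:
        # Hash the sensitive value so the same input maps to the same tag.
        digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:8]
        return f"<REDACTED:{digest}>"
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", pseudonym, text)  # emails
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", pseudonym, text)       # phone-like numbers
    return text

prompt = "Summarize the ticket from jane.doe@example.com, phone +1 555-010-7788."
safe_prompt = redact_pii(prompt)
```

Using stable pseudonyms (rather than plain deletion) lets internal systems correlate redacted values across prompts without exposing the underlying data.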

Privacy Concerns in AI Adoption

In addition to security, privacy is a significant concern for organizations and individuals considering the adoption of AI technologies. This concern extends to governments, large corporations, and even startups. The potential risks associated with data-centric AI and the potential for privacy breaches have raised important considerations for privacy protection.

One of the primary factors contributing to privacy concerns is the vast amount of personal data, often gathered from the internet, that AI systems require to function effectively. These systems frequently process and analyze sensitive information.

The misuse or mishandling of this data can lead to severe privacy infringements and potential harm to individuals. To address these concerns, privacy regulations such as the General Data Protection Regulation (GDPR) in Europe and the upcoming EU AI Act have imposed strict requirements on organizations. These regulations limit the collection, processing, and use of personal data, including through specific provisions related to AI systems. Adhering to them is crucial to ensure compliance and protect individuals' privacy rights.

By prioritizing privacy, following standards like ISO/IEC AWI 27091 (under development), complying with regulations such as the GDPR or CCPA (depending on your jurisdiction), respecting user rights, and ensuring the safety of personal data, organizations can address the privacy concerns associated with AI adoption. Protecting privacy not only fosters trust among users, but also upholds ethical principles and contributes to the responsible and sustainable deployment of AI technologies.

Other Concerns

In addition to security and privacy, there are several other significant concerns that companies may have when considering the adoption of AI.   



Maintainability

AI algorithms require continuous monitoring, updates, and adjustments to keep up with changing data patterns and evolving business needs. Neglecting maintenance can result in decreased accuracy, performance degradation, and even system failures.

Bias and Censorship

AI systems can inadvertently perpetuate biases present in the training data, leading to discriminatory outcomes. For example, in hiring processes, if historical data exhibits gender or racial biases, the AI system may learn and replicate these biases, disadvantaging certain candidates. Censorship concerns arise in content moderation, where AI algorithms may inadvertently or intentionally suppress certain viewpoints, limiting freedom of expression. Examples include social media platforms filtering or blocking content based on political ideologies (e.g., Twitter) or controversial topics. Addressing bias and censorship requires comprehensive approaches, such as inclusive training data, transparent algorithms, and ongoing evaluation. This ensures fairness, accountability, and the protection of fundamental rights.
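The hiring example above can be made concrete with a simple fairness check. This is a minimal sketch assuming hypothetical model decisions and a demographic-parity metric; real bias audits use richer metrics and statistical testing.

```python
# Illustrative check for demographic parity in hiring decisions:
# compare selection rates across groups in (hypothetical) model outputs.
def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = advance candidate, 0 = reject),
# split by a protected attribute recorded in historical data.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 advanced
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 advanced

parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
# A large gap is a signal to audit the training data and the model.
```

A gap near zero does not prove fairness on its own, but a large one is a cheap, early warning that the model may be replicating historical bias.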


Emotion Recognition

AI systems capable of understanding and responding to human emotions raise concerns about the ethical and responsible use of such technologies. There are open questions about how emotions are interpreted and the potential impact on user experience.


Creativity and Ownership

The ability of AI systems to generate creative content, such as art or music, raises questions about the originality and ownership of the generated works. Companies may also be concerned about copyright issues, attribution, and the potential devaluation of human creativity and craftsmanship.


Lack of Transparency

AI systems often operate as black boxes, making it challenging to understand how they arrive at decisions or recommendations. This lack of transparency raises concerns about accountability, fairness, and the potential for bias in decision-making processes.
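One common mitigation is to favor models whose decisions can be decomposed into per-feature contributions. Here is a minimal sketch using a hypothetical linear scoring model; the feature weights and applicant data are invented for illustration, and real systems would use established explainability tooling.

```python
# A linear score is trivially explainable: each feature's contribution
# to the final score is just weight * value.
weights = {"years_experience": 0.6, "test_score": 0.3, "referral": 0.1}  # hypothetical
applicant = {"years_experience": 5, "test_score": 8, "referral": 1}      # hypothetical

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Sorting contributions shows which features drove the decision,
# giving reviewers a concrete basis for accountability.
top_factors = sorted(contributions, key=contributions.get, reverse=True)
```

Black-box models need heavier machinery for the same transparency, which is one reason simpler, auditable models are often preferred in high-stakes decisions.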


Proportionality and Human Rights

The rapid advancement of AI technology necessitates constant evaluation of its proportionality. The implications of AI in law enforcement, surveillance, and facial recognition underscore the importance of assessing its deployment, and the potential risks to human rights make a careful weighing of benefits and drawbacks imperative. To ensure the ethical use of AI, maintaining a balance between its advantages and the potential harm it may cause is essential.

Cost and Resources

Implementing AI technologies can be resource-intensive, requiring substantial investments in infrastructure, training data, computing power, and skilled personnel. Companies may be hesitant due to concerns about the costs involved and the availability of necessary resources.

Lack of Understanding

A lack of understanding of AI technologies, their capabilities, and their potential benefits can contribute to companies' resistance. Addressing this knowledge gap through education and awareness initiatives is crucial to fostering a better understanding of AI and its potential value.

Fear of Job Displacement

The fear that AI technologies may replace human jobs is a common concern. Companies need to carefully consider the impact on their workforce, reskilling and upskilling opportunities, and the potential for AI to augment human capabilities rather than replace them.

Integration Challenges

Integrating AI systems into existing infrastructure and workflows can be complex and challenging. Compatibility issues, data integration, and interoperability concerns may hinder the seamless integration of AI technologies into existing business processes.

Cultural Resistance

Cultural factors can also influence the adoption of AI technologies. Resistance to change and a lack of acceptance within specific industries or regions can impede the widespread adoption of AI.


Addressing these concerns requires a multi-faceted approach that includes:  

  • Robust maintenance protocols, sufficient resources for ongoing monitoring, and talent capable of managing and enhancing AI systems over time.
  • Robust governance frameworks and policies to ensure ethical and responsible AI use.
  • Building transparency and explainability into AI systems to gain user trust and mitigate biases.
  • Regular auditing and evaluation of AI systems to ensure accuracy, fairness, and compliance with legal and ethical standards.
  • Collaboration between different stakeholders, including industry, academia, and policymakers, to establish guidelines, standards, and best practices for AI adoption.
  • Awareness and training initiatives to address the lack of understanding and facilitate informed decision-making regarding AI technologies.
  • Engaging in open dialogue and addressing cultural and social concerns associated with AI adoption.


In this blog, we have examined the various reasons why companies may resist the adoption of AI technologies. From concerns about maintainability to issues surrounding security, there are several factors that contribute to this resistance. By understanding and addressing these concerns, organizations can pave the way for responsible and successful AI adoption.  

In the next blog, we will delve deeper into the specific concerns and suggestions regarding security and privacy in AI adoption. We will explore the challenges that organizations may face in terms of security and privacy when implementing AI systems. Additionally, we will provide practical suggestions and best practices to address these concerns and ensure robust security and privacy measures in AI deployments.  


How can NuBinary help?

NuBinary can help your company with AI strategy and throughout the entire software development lifecycle. Our CTOs can identify the best technology strategy for your company. We've done this many times before, across a broad range of startups, and have all the necessary knowledge and tools to keep your development needs on track, whether through outsourcing or hiring internal teams.


Contact us for more information, or book a meeting with our CTOs here.