Big Tech and the military: The ethical gray zone

Businesses are partnering with governments to develop next-gen weapon technologies; however, Big Tech employees are resisting such partnerships.
    • Author: Quantumrun Foresight
    • January 18, 2023

    Insight summary



    Silicon Valley companies have driven significant innovations in autonomous technologies, such as self-driving vehicles and drones. Since the 2010s, these innovations have gradually filtered into the tech stacks of militaries worldwide to advance autonomous warfare capabilities. As a result, private sector tech employees and human rights organizations are calling for more transparent guidelines on how their countries’ respective militaries adopt private sector technologies.



    Big Tech and the military context



    Technology development for the military takes enormous amounts of time and investment. The military-industrial complex, for example, typically involves a long-standing partnership between military organizations and massive defense contractors, such as Lockheed Martin and Raytheon in the US. Their projects involve developing defense assets such as autonomous military drones and aircraft carriers. 



    However, the military-industrial complex also involves partnerships with other types of private-sector companies not traditionally associated with defense, such as Big Tech firms. As these new defense partnerships grow in number, many employees are discovering contracts that were signed without their knowledge. For example, in 2018, staff whistleblowers and investigative journalists revealed Google’s involvement with Project Maven, a US Department of Defense (DoD) initiative that uses artificial intelligence (AI) to analyze drone footage and identify potential targets. Google employees claimed that the company never sought their consent to participate in this and other defense projects.



    Nonetheless, in 2018, Google withdrew from Project Maven after thousands of employees signed a petition arguing that the company should not be in the business of war. Such developments have forced some tech firms to take a stand on where to draw the line. For example, following its withdrawal from Project Maven, Google issued ethical guidelines outlining its views on responsible AI research. The policy stated that the company would not build AI-based weapons but did not rule out continued collaboration with the defense sector on non-offensive military capabilities.



    Disruptive impact



    Collaboration between Big Tech and the military can have long-term consequences for these tech businesses. In particular, many countries increasingly view traditionally neutral companies like Google or Microsoft as allies of US military organizations. Since these firms are now known to collaborate actively with the US defense sector, other nations are becoming hesitant to continue depending on US Big Tech for software and technical support. A similar wariness applies to Chinese tech giants; for example, the US, Canada, and many European countries have begun banning Huawei 5G telecommunications equipment over fears of Chinese government surveillance.



    Moreover, some liberal-minded employees within these tech companies may come to feel exploited. In particular, Millennial and Gen Z workers tend to put a premium on ethical business practices and choose their employers accordingly. Some of these young employees may even become whistleblowers and expose secrets, harming their companies’ reputations.



    Nonetheless, the US DoD is looking to become more transparent about how it conducts its research initiatives. The Department now requires third-party vendors to publish AI ethics guidelines that clearly explain how they oversee machine learning development. Big Tech is also aggressively implementing its own responsible AI initiatives. For example, Facebook established an ethics board in 2018 to ensure that its algorithms and new features are not biased.



    Implications of Big Tech and the military



    Wider implications of Big Tech increasingly partnering with the defense sector may include:




    • Countries with developed tech sectors gaining a greater military advantage over less developed peers in the future battlespace.

    • Tech employees demanding that their firms be transparent about every project they work on, including full disclosure of stakeholders.

    • Several firms, like Google, Microsoft, and Amazon, retreating from select military and law enforcement contracts.

    • Military organizations shifting to smaller tech firms and startups to avoid scrutiny.

    • Environmental, social, and governance (ESG) ratings incorporating ethics metrics to assess whether firms are implementing AI responsibly.

    • The increasing development of algorithmic warfare capabilities, as well as autonomous drones, tanks, and underwater vehicles for wartime use.



    Questions to consider




    • If you work in a tech firm, how does your company implement responsible AI?

    • How can Big Tech and the military work together more transparently and ethically?

