LinkedIn Users Unknowingly Opted Into AI Training
LinkedIn is Using Your Data to Train AI Models by Default
Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
LinkedIn has quietly opted users into a new setting that allows their data to be used for training generative AI models; users must manually opt out if they do not want their data used this way. The change comes shortly after Meta's similar announcement about using user data for AI training.
LinkedIn has recently faced scrutiny for opting users into a setting that allows their data to be used in training generative AI models without explicit consent. This default setting, which users must manually turn off if they do not wish to participate, has raised concerns about user privacy and data usage. The change was quietly introduced alongside an updated privacy policy detailing how the platform leverages user data for various purposes, including AI model training.
The updated policy states that LinkedIn may use personal data to improve its services, develop AI capabilities, and make its offerings more relevant and useful. This extends to writing assistant features and other AI-driven enhancements. Users who wish to opt out can do so via the Data privacy tab in their account settings by toggling off the 'Data for Generative AI Improvement' option. However, this opt-out only prevents future data usage; any data already used in model training remains unaffected.
Interestingly, LinkedIn has stated that its AI training does not involve data from users in the EU, EEA, or Switzerland, and claims to use privacy-enhancing technologies to redact or remove personal data from its training sets. Despite these measures, the platform's silent opt-in policy has drawn criticism, especially in light of similar practices by other tech giants like Meta, which recently admitted to scraping non-private user data for model training since 2007.
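LinkedIn has not published details of how this redaction works. As a purely illustrative sketch, the Python snippet below shows what a simple regex-based pass to strip obvious identifiers such as email addresses and phone numbers from training text might look like; it is a generic example, not LinkedIn's actual pipeline.

```python
import re

# Illustrative only: a minimal regex-based redaction pass for obvious identifiers.
# This is NOT LinkedIn's actual method, which has not been publicly documented
# beyond the claim of using "privacy-enhancing technologies".
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone-like strings with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(sample))
    # -> Contact Jane at [EMAIL] or [PHONE].
```

Real privacy-enhancing pipelines typically go well beyond pattern matching (for example, named-entity recognition or differential privacy), but the basic idea of removing direct identifiers before training is the same.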
For business leaders and professionals who depend on LinkedIn for networking and career development, this change is a reminder to stay vigilant about privacy settings and data usage policies on the platforms they use. Reviewing privacy settings and understanding how personal data is used can help mitigate the risks of data being used without explicit consent. More broadly, understanding how user data feeds AI development is essential for making informed decisions in an increasingly data-driven environment.
LinkedIn's move underscores a broader trend in the tech industry: companies are prioritizing AI development and data utilization, sometimes at the expense of user privacy. As businesses continue to integrate AI into their operations, the balance between innovation and privacy becomes ever more critical. Professionals should monitor how their data is being used and take proactive steps to protect their personal information while still benefiting from these technologies.