
TikTok CEO Shou Zi Chew, a Harvard Business School alumnus, is navigating a high-stakes deal to divest or shut down the platform's U.S. operations, a business valued at an estimated $50 billion. This comes after the Supreme Court upheld a federal law requiring TikTok to either divest or cease its U.S. operations by January 19, 2025, citing concerns over the app's handling of user data and its exposure of sensitive information.
The ruling was the culmination of growing worries about TikTok's data harvesting practices and their potential impact on national security. With 170 million American users, the platform's future in the U.S. hangs by a thread. President Donald Trump recently signed an executive order delaying enforcement of the ban by 75 days, giving TikTok a temporary reprieve.
As Chew navigates this complex situation, potential buyers have emerged, including billionaire Elon Musk, who has expressed interest in acquiring the platform. However, the situation remains fluid, with the U.S. government and TikTok's parent company, ByteDance, engaged in ongoing discussions about the platform's future.
Data Exposure Concerns
TikTok's data mining practices have raised significant concerns, particularly regarding its access to information about people who never consented. Supreme Court Associate Justice Neil M. Gorsuch noted that TikTok can access a vast amount of personal data, including names, photos, and other sensitive information, drawn from the contact lists of consenting users, thereby exposing people who never agreed to share it. These concerns have led to increased scrutiny of TikTok's operations and are further complicated by the company's ownership structure: parent company ByteDance is headquartered in Beijing and has ties to the Chinese Communist Party (CCP).
Republican U.S. Representative Cathy McMorris Rodgers has been vocal about these concerns, stating that “TikTok has repeatedly been caught in the lie that it does not answer to the CCP through ByteDance.” She also said Chinese companies are required to grant the CCP access and manipulation capabilities as a design feature.
TikTok's impressive revenue growth has been a significant contributor to ByteDance's success. In 2023, TikTok generated $16 billion in revenue, a 67% increase from the previous year and roughly 16% of ByteDance's $100 billion in revenue. Despite this success, TikTok has faced significant challenges in the U.S. market. The app's data handling practices have raised concerns, with the Supreme Court citing issues with data harvesting and the exposure of sensitive information. This led to a brief shutdown on January 18, 2025, which was quickly reversed following President Trump's executive order.
By April 2025, TikTok's fate in the United States will be decided, as President Trump's 75-day delay of the ban is set to lapse; the company must either divest or cease its U.S. operations unless a deal is reached. The decision lands amid a broader shift in U.S. AI policy. President Joe Biden had built on Trump's 2019 Executive Order 13859, which promoted American AI leadership, by adding restraints intended to ensure trust, safety, and human-centricity. President Trump has since rescinded Biden's Executive Order 14110, which aimed to enforce the safe and trustworthy development of AI. John Villasenor, Director of the UCLA Institute for Technology, Law & Policy, said a more hands-off approach to AI regulation is expected under the Trump administration, moving away from the Biden White House's "fear-based" narrative. As regulators and business leaders weigh the trade-offs among AI leadership, responsibility, and macroeconomic gains, the tech industry stands at a critical crossroads between regulatory compliance and innovation.
The Safety and Profit Trade-Off
The acquisition of TikTok, which has over 170 million American users, is a complex undertaking. Even if a buyer is found, that buyer will still be responsible for redesigning TikTok's platform capabilities, features, and functionality, including its artificial intelligence and machine learning models.
Following Trump's announcement of a $500 billion private-sector AI infrastructure investment, AI adoption is expected to expand in both the private and public sectors. The AI industry has seen significant growth, with AI startups accounting for over 45% of the $209 billion in total funds raised in 2024. Elon Musk's xAI led the charge with $12 billion, followed by OpenAI at $6.6 billion. As AI continues to capture investors' hearts and imaginations, venture founders will increasingly explore opportunities to integrate AI-enabled features into products and services.
To achieve President Trump's American AI leadership mandate, the U.S. can draw inspiration from the EU's AI Act, which prioritizes safety, security, and trustworthiness. The EU's approach includes prohibiting AI design features that manipulate users, exploit demographic vulnerabilities, or use biometric categorization to infer identity. In the U.S., states such as California, Colorado, and Virginia are leading the charge in providing regulatory guidance for responsible AI development. At the same time, the U.S. tech industry must comply with data privacy, trust, and safety regulations covering billions of users worldwide.
The EU's AI Act also regulates how machine learning training data is extracted and collected, and EU member states such as the Netherlands restrict web scraping to non-commercial use only. As EU states expand GDPR requirements, companies like TikTok must explore ethical and authorized methods of accessing and using model training data. To foster innovation while ensuring safety and security, the EU has also introduced AI regulatory sandboxes: controlled environments in which AI systems can be developed, tested, and validated before being released to the market. The U.S. could consider similar initiatives to support responsible AI development. Ultimately, achieving President Trump's American AI leadership mandate requires a balanced approach that promotes innovation while ensuring safety, security, and trustworthiness.
Adopting AI Responsibly
While TikTok's future in the U.S. remains uncertain, the Trump administration is reviewing the video-sharing app's data collection capabilities to mitigate the risks of AI adoption. As the White House shapes a regulatory framework to achieve American AI leadership, TikTok and other tech companies are exploring ways to implement responsible AI practices, including conducting AI responsibility assessments and implementing controls against subliminal manipulation and the exploitation of biometric data. To address these concerns, companies like TikTok may need to adopt guardrails, such as transparency and explainability measures, to ensure their AI systems are safe and trustworthy.
We, the people of the United States, have a profound responsibility to the millions of Americans and billions of users worldwide who rely on technology products, services, and platforms. With 170 million active monthly users across America, we must remain accountable to those who trust us with their digital lives. As we strive to realize American AI leadership, we must recognize that freedom comes with great responsibility. We must ensure that our pursuit of innovation and progress is balanced with the need to protect the rights, safety, and well-being of all people who use technology every day to connect, imagine, and achieve.

Ibe Imo (Journalism ’25) is a Harvard Master of Journalism candidate, storyteller, and technology professional. He is a fitness enthusiast and enjoys outdoor activities, including hiking and trail cycling.