Google Strengthens Android Privacy Policies and Intensifies Fight Against Malvertising with AI-Powered Defenses

Google has unveiled a comprehensive suite of policy updates for the Android ecosystem and the Play Store, significantly bolstering user privacy and fortifying defenses against fraud. The changes arrive alongside the company’s disclosure that in 2025 alone it blocked or removed an astounding 8.3 billion ads globally and suspended 24.9 million advertiser accounts, underscoring the relentless battle against malicious actors in digital advertising. The new policies, particularly those governing contact and location permissions, are designed to give users more granular control over their data, while a new secure app ownership transfer feature aims to protect businesses from fraud. Concurrently, the tech giant is leveraging its advanced artificial intelligence model, Gemini, to detect and preemptively block a growing volume of policy-violating ads.
Enhancing User Privacy Through Granular Permissions
A cornerstone of Google’s latest policy updates revolves around how third-party applications access sensitive user data, specifically contact lists and location information. These revisions mark a significant step towards a more privacy-centric Android experience, moving away from broad permissions to a model of explicit, limited, and user-controlled access. The updates are slated to roll out with Android 17, currently in beta, and will progressively become mandatory for developers.
The Introduction of the Contact Picker
One of the most impactful changes is the introduction of a new Contact Picker. This feature provides a standardized, secure, and searchable interface for contact selection, fundamentally altering how apps interact with a user’s address book. Previously, apps that needed contact access typically requested the READ_CONTACTS permission. That permission was notoriously broad: it granted an application unfettered access to the user’s entire contact list, including names, phone numbers, email addresses, and other associated metadata, with no way for the user to limit access to the specific contacts or fields the app actually needed. This "all or nothing" approach posed a considerable privacy risk, as a single malicious or compromised app could exfiltrate vast amounts of personal data.
With the Contact Picker, users are empowered to grant applications access only to the specific contacts they choose, aligning perfectly with Android’s stated commitment to data transparency and minimizing permission footprints. Google articulated this shift, emphasizing that it allows users to make informed decisions about what data they share. The new mechanism, introduced in Android 17, also enables apps to request only specific fields from a contact, such as just a phone number or an email address, rather than the entire contact record. This granular control dramatically reduces the potential for data overcollection and misuse.
The updated policy mandates that all applicable apps utilize the Contact Picker (or the Android Sharesheet, another secure sharing mechanism) as the primary method for accessing users’ contacts. The READ_CONTACTS permission is now reserved for a very limited set of applications that genuinely cannot function without full, ongoing access to a user’s contact list. Developers targeting Android 17 and later are strongly advised to remove the READ_CONTACTS permission entirely from their app manifest declarations unless absolutely critical. Should an app require such extensive access, developers must justify this need by submitting a Play Developer Declaration within the Play Console, undergoing a rigorous review process to ensure compliance and genuine necessity.
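Android 17's Contact Picker API is not detailed in the announcement, but the long-standing platform picker already illustrates the model it mandates: the user hands the app a single contact row without the app ever holding READ_CONTACTS. The sketch below uses today's ACTION_PICK intent scoped to phone-number entries; the class and method names (InvitePickerActivity, readPhoneNumber) are hypothetical, chosen only for illustration.

```kotlin
// Sketch only: this uses the existing ACTION_PICK intent, which already lets
// the user share exactly one field of one contact; the app declares no
// contacts permission in its manifest at all.
import android.content.Intent
import android.net.Uri
import android.provider.ContactsContract
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class InvitePickerActivity : ComponentActivity() {

    // Receives the result of the system picker; contactUri is null if the
    // user backed out without choosing anything.
    private val pickPhoneNumber =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
            val contactUri: Uri? = result.data?.data
            contactUri?.let { readPhoneNumber(it) }
        }

    fun launchPicker() {
        // Scope the picker to phone-number rows, so the user shares one
        // field of one contact, nothing more.
        val intent = Intent(Intent.ACTION_PICK).apply {
            type = ContactsContract.CommonDataKinds.Phone.CONTENT_TYPE
        }
        pickPhoneNumber.launch(intent)
    }

    private fun readPhoneNumber(uri: Uri) {
        // Query only the NUMBER column of the single row the user selected.
        contentResolver.query(
            uri,
            arrayOf(ContactsContract.CommonDataKinds.Phone.NUMBER),
            null, null, null
        )?.use { cursor ->
            if (cursor.moveToFirst()) {
                val number = cursor.getString(0)
                // Use the number, e.g. to pre-fill an invite form.
            }
        }
    }
}
```

The key design point is that the permission boundary moves from the app to the system UI: the app receives a URI grant for one row only, so removing READ_CONTACTS from the manifest costs the app nothing for this flow.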
Streamlined Location Access and Enhanced Transparency
Parallel to the contact permission overhaul, Google is also refining how applications request and utilize location data. Location information is among the most sensitive data points, capable of revealing a user’s movements, habits, and even their home or work address. Recognizing this, Android 17 introduces a streamlined location button, enabling apps to request one-time access to a user’s precise location.
This "one-time" access model is a significant improvement over previous paradigms, where apps often requested continuous background location access without clear justification. The new button allows users to make a more nuanced choice about the duration and precision of the location information they share. For instance, a user might grant an app one-time precise location access to order a food delivery, but not continuous background access.
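One-time grants are not entirely new to the platform: since Android 11 the system permission dialog has offered an "Only this time" option, and the Android 17 location button builds on that same model. The sketch below shows how a request for a single precise fix looks with today's AndroidX APIs; DeliveryActivity and fetchCurrentLocationOnce are hypothetical names used only for illustration.

```kotlin
// Sketch: requesting precise location at the moment it is needed, relying on
// the system's one-time ("Only this time") grant rather than a standing
// permission. If the user picks the one-time option, the grant is revoked
// automatically once the app leaves the foreground.
import android.Manifest
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class DeliveryActivity : ComponentActivity() {

    private val requestLocation =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                fetchCurrentLocationOnce()
            }
        }

    fun onOrderButtonClicked() {
        // Ask only when the user takes a location-dependent action,
        // never up front at app launch.
        requestLocation.launch(Manifest.permission.ACCESS_FINE_LOCATION)
    }

    private fun fetchCurrentLocationOnce() {
        // Placeholder: obtain a single fix (e.g. via a fused location
        // provider's current-location call) and do not keep listening.
    }
}
```

Requesting the permission in context, rather than at startup, is also what makes the one-time choice meaningful to the user: the dialog appears next to the action that needs the data.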
Furthermore, to enhance transparency and user awareness, Android 17 will feature a persistent indicator that alerts a user every time a non-system application accesses their location. This visual cue serves as a constant reminder, empowering users to revoke permissions if they detect unauthorized or unnecessary location tracking.

Developers are urged to review their apps’ location usage to ensure they are requesting the minimum amount of location data necessary for their functionality. For apps targeting Android 17 and above that use precise location for discrete, temporary actions, developers must implement the new location button by adding the onlyForLocationButton flag in their manifest. Similar to contact permissions, if an app requires persistent, precise location access for its core features, developers will need to submit a Play Developer Declaration in the Play Console, providing compelling justification for why the new one-time access button or coarse location data is insufficient.
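The announcement names an onlyForLocationButton manifest flag but does not give its exact syntax, so the attribute placement below is an assumption for illustration only, modeled on how other per-permission attributes are declared.

```xml
<!-- Hypothetical AndroidManifest.xml excerpt: the onlyForLocationButton flag
     comes from the policy announcement, but its exact attribute name and
     placement here are assumptions, not confirmed syntax. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <!-- Precise location is declared only for discrete, one-time actions
         driven by the system location button, not continuous tracking. -->
    <uses-permission
        android:name="android.permission.ACCESS_FINE_LOCATION"
        android:onlyForLocationButton="true" />

</manifest>
```

Apps that cannot satisfy their core features this way would instead keep a conventional precise-location declaration and justify it through the Play Developer Declaration process described above.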
Timeline for Policy Implementation and Developer Compliance
The transition to these new privacy-focused policies will be phased. The Android 17 beta is already available, allowing developers to begin adapting their applications. The Play Developer Declaration forms, crucial for justifying broad contact or persistent precise location access, are expected to become available before October 2026, and pre-review checks in the Play Console will go live starting October 27, 2026, to help developers identify potential policy issues related to contacts and location permissions before submission. This staggered approach gives developers ample time to update their applications and ensure compliance without disrupting the user experience.
Securing App Ownership Transfers to Combat Fraud
Beyond individual user privacy, Google is also addressing business-level security within its ecosystem. The company is implementing a secure, native account transfer feature directly within the Play Console. This mechanism is designed to provide a trustworthy and verifiable method for businesses to transfer ownership of their applications, thereby protecting against various forms of fraud.
The importance of this feature cannot be overstated. In the burgeoning app economy, the buying and selling of apps or entire developer accounts has become common. However, unofficial transfers, often involving the sharing of login credentials or transactions on unregulated third-party marketplaces, leave businesses highly vulnerable to scams, intellectual property theft, and account hijacking. These insecure practices can lead to significant financial losses, reputational damage, and even the compromise of user data if a malicious actor gains control of an app.
Starting May 27, 2026, Google strongly recommends, and will eventually mandate, that app developers handle all account ownership changes exclusively through this new native feature. This move aims to standardize the transfer process, introduce necessary security protocols, and ensure that only legitimate changes in ownership are recognized, closing a significant loophole that bad actors have exploited in the past. By disallowing unofficial transfers, Google reinforces the integrity of the Play Store ecosystem for both developers and users.
Google’s Relentless War on Malvertising: AI at the Forefront
These proactive policy updates for the Android ecosystem arrive as Google continues its vigorous campaign against malvertising and ad fraud across its vast advertising platforms. The company’s recently published 2025 Ads Safety Report highlights significant progress, largely attributed to the sophisticated capabilities of its artificial intelligence model, Gemini.
In 2025, Google reported blocking or removing an astonishing 8.3 billion ads globally that violated its policies. This figure represents an immense scale of preventative action: the vast majority (over 99%) of policy-violating ads were intercepted by Google’s automated systems before they could even be displayed to users. This proactive blocking is critical, as malicious ads, or "malvertising," can lead to phishing scams, malware infections, unwanted redirects, and a host of other detrimental experiences for users.
The report also detailed the suspension of 24.9 million advertiser accounts in 2025. These suspensions are often the result of repeated policy violations, egregious offenses, or direct involvement in fraudulent schemes. The sheer volume of account suspensions underscores Google’s zero-tolerance policy for advertisers who attempt to exploit its platform for illicit gains.
The Power of Gemini AI in Combating Sophisticated Threats

A key differentiator in Google’s current approach to ad safety is the integration of Gemini, its cutting-edge AI model. Traditionally, ad moderation systems relied heavily on keyword matching and rule-based logic. While effective to a degree, these systems could be circumvented by sophisticated fraudsters who constantly evolve their tactics, using cloaking techniques, subtle linguistic shifts, or rapidly changing ad content to evade detection.
Keerat Sharma, Vice President and General Manager of Ads Privacy and Safety at Google, emphasized the transformative impact of Gemini. "Unlike earlier keyword-based systems, our latest models better understand intent, helping us spot malicious content and preemptively block it, even when it’s designed to evade detection," Sharma stated in a post shared with The Hacker News. This intent-based understanding allows Gemini to analyze the context, implied meaning, and overall objective of an ad, rather than just its surface-level keywords. This capability is particularly vital in detecting nuanced scams, deceptive practices, and rapidly emerging fraud patterns.
The scale of AI’s impact is further evidenced by specific figures: in 2025, Google removed or blocked 602 million ads and suspended 4 million accounts specifically associated with scams or scam-related activity. This includes everything from fake technical support scams to deceptive financial products and fraudulent giveaways. Beyond outright blocking, over 4.8 billion ads were restricted due to policy violations, meaning they could not be shown in certain contexts or to specific user demographics, or required additional disclaimers. Furthermore, over 480 million web pages were actioned for attempting to serve prohibited content, such as sexually explicit material, weapons promotion, illegal online gambling, and malware.
Comparison with Previous Years and the Evolving Threat Landscape
Comparing the 2025 data with previous years highlights the escalating nature of the threat landscape and Google’s increasing sophistication in combating it. In 2024, Google reported suspending over 39.2 million advertiser accounts and stopping 5.1 billion bad ads. While the number of suspended advertiser accounts decreased in 2025 (24.9 million vs. 39.2 million), the number of blocked/removed ads significantly increased (8.3 billion vs. 5.1 billion). This shift could indicate more effective proactive detection, catching malicious ads before they lead to account suspensions, or a change in the nature of the threats. Additionally, 2024 saw 9.1 billion ads restricted and ads blocked or restricted on 1.3 billion pages, compared to 4.8 billion restricted ads and 480 million pages actioned in 2025. These fluctuations reflect the dynamic nature of online fraud, with bad actors constantly adapting their methods.
The rise of generative AI presents both challenges and opportunities in this fight. While bad actors are now leveraging generative AI to create highly deceptive and convincing ads at an unprecedented scale, Google is counteracting this by deploying its own AI capabilities. By the end of 2025, the majority of Responsive Search Ads created in Google Ads were reviewed instantly, with harmful content blocked at the point of submission. Google plans to extend this real-time blocking capability to more ad formats throughout the current year, aiming to cut off malicious campaigns at their inception.
Implications for Users, Developers, and the Digital Ecosystem
These integrated efforts—both in enhancing Android’s privacy architecture and in combating malvertising—carry profound implications for all stakeholders in the digital ecosystem.
For users, the new policies translate into significantly enhanced control and transparency. The Contact Picker and streamlined location access empower individuals to make more informed and granular decisions about their personal data, reducing the risk of unwanted data collection and privacy breaches. The persistent location indicator adds another layer of security, fostering greater trust in the apps they use.
For app developers, these updates necessitate a shift in mindset and development practices. The era of requesting broad, undifferentiated permissions is drawing to a close. Developers must now adopt a "data minimization" approach, requesting only the precise data required for core app functionality. This requires careful review of existing apps, potential re-architecting of permission requests, and diligent adherence to the new declaration processes. While initially requiring effort, these changes ultimately contribute to a more trustworthy ecosystem, which can benefit reputable developers by increasing user confidence and adoption. The secure app ownership transfer feature also provides a much-needed layer of protection for businesses engaging in app transactions.
For the broader digital ecosystem, Google’s initiatives reinforce a growing global trend towards stronger data privacy regulations and user consent. These moves align with principles seen in GDPR, CCPA, and other privacy frameworks worldwide, setting a higher bar for data stewardship. The advanced use of AI in combating ad fraud also demonstrates the critical role technology plays in maintaining the integrity and safety of online platforms, particularly as malicious actors become more sophisticated.
In conclusion, Google’s latest announcements represent a multi-pronged approach to safeguarding its vast digital empire. By refining Android’s privacy controls, providing secure business tools, and deploying cutting-edge AI to counter malvertising, the company aims to foster a more transparent, secure, and trustworthy environment for billions of users and millions of developers worldwide. The ongoing evolution of these policies and technologies will be crucial in the perpetual battle against digital threats.