Ransomware and Other Cyberattacks on Cities: What to Do Before, During, and After

Recently, Foster City declared a local state of emergency after a ransomware-related cyberattack forced the city to take its network offline, disrupting email and phones and limiting services while investigators and outside specialists worked on the incident. Bay Area communities like Oakland, Hayward, and St. Helena have faced similar attacks in recent years, reinforcing a hard reality for municipalities: cities are attractive targets because they run essential services, store sensitive resident and employee data, and depend on a wide ecosystem of vendors and interconnected systems.

Ransomware is only one threat category. Cities also contend with account takeover attempts such as password-spray attacks, business email compromise, data theft and extortion, and other forms of disruptive intrusion. The common thread is that outcomes improve dramatically when leaders focus on three phases: reduce exposure before an incident, contain and coordinate quickly during an incident, and recover cleanly while hardening controls afterward.

Before an attack: reduce exposure and limit blast radius

1) Close the front door (identity + access)

Most ransomware incidents begin with predictable weaknesses, and identity is at the top of the list. A city should require multi-factor authentication across email, remote access, and all administrative accounts, while separating privileged accounts from everyday use and limiting who has them. It also needs practical visibility into risky sign-ins, including abnormal login patterns and repeated failures across many accounts, because early detection is often the difference between a blocked attempt and a widespread outage.

This is also why continuous monitoring matters. Password-spray attacks are a common precursor to larger incidents, and stopping them depends on catching patterns that may not look severe on any single account. Caught early, a spray can be shut down before it escalates into broader compromise.
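
To make that concrete, here is a minimal sketch in Python of the pattern a detection rule looks for: failed logins that stay quiet per account but get noisy per source. The field names (user, source_ip, timestamp, success) and the thresholds are illustrative assumptions; real logs from your identity provider or SIEM will differ.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative thresholds; tune to your environment.
WINDOW = timedelta(minutes=30)   # look-back window per source IP
ACCOUNT_THRESHOLD = 10           # distinct accounts failing from one source

def detect_password_spray(events):
    """Flag source IPs whose failed logins span many distinct accounts.

    A spray looks mild per account (one or two failures each), so the
    grouping here is by source IP rather than by user. Each event is a
    dict with assumed keys: user, source_ip, timestamp, success.
    """
    events = sorted(events, key=lambda e: e["timestamp"])
    failures = defaultdict(list)  # source_ip -> [(timestamp, user), ...]
    alerts = []

    for e in events:
        if e["success"]:
            continue
        bucket = failures[e["source_ip"]]
        bucket.append((e["timestamp"], e["user"]))
        # Keep only failures inside the look-back window.
        cutoff = e["timestamp"] - WINDOW
        bucket[:] = [(t, u) for t, u in bucket if t >= cutoff]
        distinct = {u for _, u in bucket}
        if len(distinct) >= ACCOUNT_THRESHOLD:
            alerts.append((e["source_ip"], len(distinct), e["timestamp"]))

    return alerts
```

The design choice worth copying is the grouping: sprays are engineered to stay under per-account lockout thresholds, so per-account alerting misses them while per-source counting catches them.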

2) Make containment possible (segmentation + endpoint control)

Prevention is not only about keeping attackers out; it is also about limiting what happens if they get in. IT should be able to isolate a department or critical system within minutes, not hours, and endpoint detection and response (EDR) should be deployed across workstations and servers and managed centrally. That technical readiness should be matched with an after-hours escalation path and clear decision authority, so there is no confusion about who can take systems offline, who coordinates communications, and who contacts insurance and legal counsel when time is tight.
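
Isolation playbooks are environment-specific, but one readiness check is easy to automate: confirming the endpoint agent is actually everywhere you think it is. Here is a minimal sketch in Python, assuming CSV exports from an asset inventory and an EDR console (file and column names are illustrative), that flags coverage gaps before an incident rather than during one.

```python
import csv

def load_hostnames(path, column):
    """Read one column of hostnames from a CSV export, normalized."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Assumed export files; substitute your own inventory and EDR console exports.
inventory = load_hostnames("asset_inventory.csv", "hostname")
enrolled = load_hostnames("edr_enrolled_hosts.csv", "hostname")

uncovered = sorted(inventory - enrolled)  # assets with no EDR agent
unknown = sorted(enrolled - inventory)    # agents on hosts not in inventory

print(f"{len(uncovered)} assets lack EDR coverage:")
for host in uncovered:
    print(f"  {host}")

print(f"{len(unknown)} enrolled hosts are missing from inventory:")
for host in unknown:
    print(f"  {host}")
```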

3) Make recovery real (backups + restore testing)

A ransomware event turns into a prolonged outage when recovery plans are theoretical. Offline or immutable backups should exist, and restore tests should be performed often enough that leadership can trust the results. Cities also benefit from a documented “minimum viable services” restoration order, because functions like finance and payroll, permitting, and public safety support become urgent quickly during an outage.
If your team does not have a crisp first-24-hours plan written down, a practical starting point is the Ransomware Incident Response Checklist.
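
Part of a restore test can be automated. The sketch below assumes a scratch restore location and a manifest of SHA-256 hashes recorded at backup time (both paths are illustrative); it verifies that a restored sample matches what was backed up. It is a starting point, not a substitute for periodically restoring full systems end to end.

```python
import hashlib
import json
from pathlib import Path

RESTORE_DIR = Path("/tmp/restore_test")  # where the sample set was restored
MANIFEST = Path("backup_manifest.json")  # {"relative/path": "sha256hex", ...}

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = json.loads(MANIFEST.read_text())
failures = []

for rel_path, want in expected.items():
    restored = RESTORE_DIR / rel_path
    if not restored.exists():
        failures.append(f"missing: {rel_path}")
    elif sha256(restored) != want:
        failures.append(f"hash mismatch: {rel_path}")

if failures:
    print(f"RESTORE TEST FAILED ({len(failures)} problems):")
    for problem in failures:
        print(f"  {problem}")
else:
    print(f"Restore test passed: {len(expected)} files verified.")
```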

4) Reduce vendor and data risk (third parties + data mapping)

Municipal environments typically involve broad third-party access and many systems that store sensitive information. Cities should maintain a current view of which vendors have network access, how that access is secured, and what data those vendors can reach. In parallel, keeping an up-to-date map of sensitive resident and employee data supports faster scoping, cleaner containment decisions, and more accurate notification planning if an incident escalates.
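
A vendor-access register does not require specialized software to be useful; keeping it as structured data means it can be queried and flagged automatically. The fields, example entries, and review interval in this sketch are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAccess:
    vendor: str
    systems: list[str]       # what the vendor can reach
    access_method: str       # e.g., VPN, site-to-site tunnel, SaaS API
    mfa_required: bool
    data_classes: list[str]  # kinds of sensitive data reachable
    last_reviewed: date

# Illustrative entries; the real register comes from your own environment.
REGISTER = [
    VendorAccess("Utility billing SaaS", ["billing database"], "SaaS API",
                 True, ["resident PII", "payment data"], date(2024, 11, 1)),
    VendorAccess("HVAC contractor", ["building management VLAN"], "VPN",
                 False, [], date(2023, 2, 15)),
]

REVIEW_INTERVAL_DAYS = 180

for v in REGISTER:
    age = (date.today() - v.last_reviewed).days
    if not v.mfa_required:
        print(f"FLAG: {v.vendor} has access without MFA ({v.access_method})")
    if age > REVIEW_INTERVAL_DAYS:
        print(f"FLAG: {v.vendor} access last reviewed {age} days ago")
```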

During an attack: contain quickly, preserve evidence, and keep options open

1) Contain first (minutes–hours)

When ransomware strikes, the first hours are decisive. Suspected systems should be isolated to stop lateral movement, suspicious accounts disabled, privileged credentials reset, and multi-factor enforcement verified. The goal is to stop spread quickly while keeping unaffected services stable wherever possible.
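
The exact commands depend on your directory or identity platform, but the sequencing is worth scripting before you need it. In the sketch below, IdentityClient and its methods are hypothetical stand-ins for a real identity provider API; the points to copy are the order of operations and the action log it leaves for the incident record.

```python
class IdentityClient:
    """Hypothetical stub so the sketch runs; replace with your real IdP client."""
    def disable_account(self, a): print(f"  disable {a}")
    def revoke_sessions(self, a): print(f"  revoke sessions {a}")
    def force_password_reset(self, a): print(f"  reset password {a}")
    def require_mfa(self, a): print(f"  enforce MFA {a}")

def contain_accounts(idp, suspects, privileged):
    """Disable suspect accounts, then reset privileged credentials, logging each step."""
    actions = []
    for account in suspects:
        idp.disable_account(account)       # hypothetical call
        idp.revoke_sessions(account)       # kill live tokens, not just passwords
        actions.append(f"disabled + sessions revoked: {account}")
    for account in privileged:
        idp.force_password_reset(account)  # hypothetical call
        idp.require_mfa(account)           # verify MFA enforcement while resetting
        actions.append(f"credentials reset + MFA enforced: {account}")
    return actions

# Illustrative account lists from triage.
log = contain_accounts(IdentityClient(), ["svc-backup", "jdoe"], ["admin-it", "admin-fin"])
for entry in log:
    print(entry)
```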

2) Preserve evidence and coordinate early (legal + insurance + forensics)

Cities frequently lose time when evidence is overwritten or key stakeholders are looped in too late. Logs and forensic data should be captured before systems are wiped or reimaged, and legal counsel and the cyber insurer should be engaged early so that privilege, coverage, and vendor engagement are coordinated correctly.
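
Even a simple collection script helps here. This sketch copies key logs to a separate evidence location and records a SHA-256 manifest before anything is wiped or reimaged; the source paths are illustrative, and actual forensic collection should be guided by counsel and your forensic partner.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Illustrative sources and destination; adapt to your systems.
SOURCES = [Path("/var/log/auth.log"), Path("/var/log/syslog")]
EVIDENCE_DIR = Path("/mnt/evidence") / datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EVIDENCE_DIR.mkdir(parents=True, exist_ok=True)
manifest = {"collected_utc": datetime.now(timezone.utc).isoformat(), "files": {}}

for src in SOURCES:
    if not src.exists():
        manifest["files"][str(src)] = {"status": "missing at collection time"}
        continue
    dest = EVIDENCE_DIR / src.name
    shutil.copy2(src, dest)  # copy2 preserves file timestamps
    manifest["files"][str(src)] = {"sha256": sha256(dest), "copied_to": str(dest)}

(EVIDENCE_DIR / "manifest.json").write_text(json.dumps(manifest, indent=2))
print(f"Evidence and manifest written to {EVIDENCE_DIR}")
```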

3) Stand up incident command (decision rights + communications)

One incident lead should coordinate a tight decision group that includes IT, leadership, legal, finance, and communications. That group should establish internal communication channels, set up a public update cadence for residents and vendors, and align on decision rights for service shutdowns, restoration steps, and external notifications.

4) Validate recovery paths before restoring (clean backups + restore order)

Restoration should be based on verified recovery options, not urgency alone. Teams should confirm backups are clean and restorable before beginning large-scale recovery, then restore in a safe, documented sequence that prioritizes minimum viable services.
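
Conceptually, restoration is a gated sequence: nothing comes back until its backup verifies clean, and systems return in a documented priority order. The tiers and the placeholder functions in this sketch are illustrative assumptions standing in for your own restore order and tooling.

```python
# "Minimum viable services" first; tiers here are illustrative.
RESTORE_ORDER = [
    (1, "public safety support systems"),
    (1, "finance and payroll"),
    (2, "permitting"),
    (2, "email"),
    (3, "public website"),
]

def backup_verified_clean(system: str) -> bool:
    """Placeholder: hash verification plus malware scanning of the backup."""
    return True  # wire to real verification before relying on this

def restore(system: str) -> None:
    """Placeholder for the documented restore procedure for this system."""
    print(f"restoring: {system}")

for tier, system in sorted(RESTORE_ORDER):
    if not backup_verified_clean(system):
        print(f"HOLD (tier {tier}): {system} backup failed verification; escalate")
        continue
    restore(system)
```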

Foster City’s experience illustrates what “taking the network offline” can mean in practice, including disruptions to email and phones and limitations on services while essential emergency functions continue.

After an attack: restore trust and reduce repeat risk

1) Rebuild trust in the environment (credentials + controls + monitoring)

Recovery is not complete when systems come back online. It is complete when systems can be trusted. After an incident, cities should reset and rotate credentials as broadly as the incident scope warrants, review and reduce privileged access, validate endpoint coverage and patch levels, and ensure segmentation and monitoring are functioning as designed before reopening access widely.
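
One piece of that review is easy to automate: comparing current administrative group membership against a change-controlled approved list and flagging drift. The account names and lists in this sketch are illustrative; in practice both sides come from your directory export and change records.

```python
APPROVED_ADMINS = {"admin-it", "admin-net"}  # change-controlled list
CURRENT_ADMINS = {"admin-it", "admin-net", "jdoe", "svc-legacy"}  # directory export

unapproved = sorted(CURRENT_ADMINS - APPROVED_ADMINS)
missing = sorted(APPROVED_ADMINS - CURRENT_ADMINS)

for account in unapproved:
    print(f"FLAG: {account} holds admin rights without approval; remove or justify")
for account in missing:
    print(f"NOTE: approved admin {account} not in directory; confirm intentional")
```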

2) Confirm scope and obligations (data access + notifications)

Post-incident work should include confirming whether data was accessed or exfiltrated and coordinating any legal, regulatory, and resident communications obligations with counsel. This is also the stage where third-party exposure and downstream impacts should be assessed with vendors.

3) Turn lessons into control upgrades (root cause + resilience)

The most important “after” step is reducing repeat risk. That means resolving the root cause, whether it was identity gaps, exposed remote access, patching failures, or overly broad vendor access. It also means improving logging and monitoring, revisiting backup immutability, and increasing restore-test cadence so the city is better prepared for the next attempt.

Practical next steps

If you want a straightforward way to answer “Are we exposed, and would we know fast enough?”, start with the Ransomware Incident Response Checklist. If you want to operationalize prevention and detection beyond checklists, managed cybersecurity services may be the better fit. And for a real example of how proactive monitoring can stop a common attack pattern early, review this password-spray success story.

Ready to learn more? Get the latest Xantrion news and IT tips.
