Nationwide Network Outage
Verizon experienced a significant network outage on September 30, 2024, which impacted a large number of users across the United States, particularly on the East Coast, including major cities like New York City, Washington, D.C., and Atlanta. Many customers reported that their phones were stuck in “SOS mode,” meaning they could not connect to the network for regular service but could still make emergency calls. Verizon did not immediately disclose the exact cause of the outage, but the company’s spokesperson indicated that their engineering team was working to identify and resolve the issue quickly. The service disruption was noted as widespread, with data from Downdetector showing over 100,000 reports of service problems at the peak of the incident (Fox Business).
What Caused the Verizon Network Outage?
As of now, Verizon has not provided a specific reason for the network outage, which has left customers frustrated. Network outages can happen for a variety of reasons, including hardware failures, software or update issues, fiber cuts, or severe weather damaging infrastructure. The fact that Verizon’s engineering team is still working on a resolution suggests that the cause could be complex, potentially involving multiple layers of the network. Without an official statement, however, the exact root cause remains speculative.
What Does SOS Mean?
“SOS” on a mobile phone refers to a state where the device is not connected to its usual network but can still access another available network for emergency purposes. In the context of the Verizon outage, many customers reported seeing “SOS” or “SOS only” on their phones, indicating that while they couldn’t make regular calls or use mobile data, their devices could still connect to other networks for emergency calls. According to Apple, this “SOS” indicator appears when the phone has lost its normal signal, but emergency services are still accessible through roaming capabilities on other networks (Fox Business).
In cases of network disruption, the SOS state ensures that users can still reach emergency services even if their primary carrier is experiencing problems.
How to Fix a Phone Stuck in SOS Mode
1. Restart Your Device
Sometimes, simply restarting your phone can help re-establish a connection with Verizon’s network. Turn off your phone, wait for about 30 seconds, and then turn it back on.
2. Toggle Airplane Mode On and Off
Another quick solution is to turn on airplane mode for about 10-15 seconds and then turn it off again. This can help your device reset its connection to the network.
3. Check Network Settings
- Ensure that mobile data is enabled.
- Check if your phone is set to the correct network mode (e.g., LTE/4G or 5G).
- Make sure that there are no specific restrictions or incorrect settings applied that could prevent your device from accessing Verizon’s network.
4. Update Carrier Settings
Your phone may require an update to the carrier settings, which can often be found under “Settings” > “General” > “About” (for iPhones) or under “Settings” > “About Phone” > “System Updates” (for Android devices). Make sure that these are updated, as outdated settings can affect connectivity.
5. Reset Network Settings
If the problem persists, you may want to try resetting your network settings. This can often resolve underlying configuration issues. Be cautious as this will erase all saved Wi-Fi networks, paired Bluetooth devices, and other network-related settings.
- For iPhone: Go to “Settings” > “General” > “Reset” > “Reset Network Settings.”
- For Android: Go to “Settings” > “System” > “Reset Options” > “Reset Wi-Fi, mobile & Bluetooth.”
6. Check for Verizon Outages
Sometimes the problem is not with your phone but with Verizon’s network itself. You can check for service outages in your area:
- Visit websites like Downdetector to see if other people in your area are reporting similar issues.
- You can also visit Verizon’s official website or their social media accounts for updates.
7. Reinsert SIM Card
Remove your SIM card, inspect it for damage, and then reinsert it. This can help the phone reconnect to Verizon’s network, particularly if the SIM card was not properly seated.
8. Contact Verizon Support
If none of these methods work, your best option may be to contact Verizon customer support directly. They can check your account for potential issues, help with troubleshooting, and escalate the problem if necessary.
Temporary Fixes
If your phone remains in SOS mode:
- Emergency Calls: Remember that even in SOS mode, your phone can still place emergency calls. If you need to contact emergency services, your device should still be able to connect to an available network.
Most issues like these are temporary, particularly during major outages, and should resolve once Verizon addresses the network disruptions.
Cloudflare Outage Disrupts Global Internet — Company Restores Services After Major Traffic Spike
November 18, 2025 — MAG212NEWS
A significant outage at Cloudflare, one of the world’s leading internet infrastructure providers, caused widespread disruptions across major websites and online services on Tuesday. The incident, which began mid-morning GMT, temporarily affected access to platforms including ChatGPT, X (formerly Twitter), and numerous business, government, and educational services that rely on Cloudflare’s network.
According to Cloudflare, the outage was triggered by a sudden spike in “unusual traffic” flowing into one of its core services. The surge caused internal components to return 500-series error messages, leaving users unable to access services across regions in Europe, the Middle East, Asia, and North America.
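For client applications caught in an incident like this, a standard defensive pattern is to treat 500-series responses as transient and retry with exponential backoff rather than failing outright. The sketch below is illustrative only, in Python with the requests library; the URL is a placeholder, not a real Cloudflare or customer endpoint.

```python
import time

import requests

# Hypothetical endpoint used for illustration only.
URL = "https://example.com/api/health"

def fetch_with_backoff(url, attempts=5, base_delay=1.0):
    """Retry transient 5xx responses with exponential backoff.

    During an edge-provider incident, origin servers are often healthy,
    so a short retry window lets clients ride out brief error spikes
    without hammering the already-degraded edge.
    """
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code < 500:
                return resp  # success, or a non-retryable client error
        except requests.RequestException:
            pass  # connection resets are treated like transient failures
        # Exponential backoff: 1s, 2s, 4s, ... keeps retry pressure low.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"{url} still failing after {attempts} attempts")

if __name__ == "__main__":
    print(fetch_with_backoff(URL).status_code)
```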
Impact Across the Web
Because Cloudflare provides DNS, CDN, DDoS mitigation, and security services for millions of domains — powering an estimated 20% of global web traffic — the outage had swift and wide-reaching effects.
Users reported:
- Website loading failures
- “Internal Server Error” and “Bad Gateway” messages
- Slowdowns on major social platforms
- Inaccessibility of online tools, APIs, and third-party authentication services
The outage also briefly disrupted Cloudflare’s own customer-support portal, highlighting the interconnected nature of the company’s service ecosystem.
Cloudflare’s Response and Restoration
Cloudflare responded within minutes, publishing updates on its official status page and confirming that engineering teams were investigating the anomaly.
The company took the following steps to restore operations:
1. Rapid Detection and Acknowledgement
Cloudflare engineers identified elevated error rates tied to an internal service degradation. Public communications were issued to confirm the outage and reassure customers.
2. Isolating the Affected Systems
To contain the disruption, Cloudflare temporarily disabled or modified specific services in impacted regions. Notably, the company deactivated its WARP secure-connection service for users in London to stabilize network behavior while the fix was deployed.
3. Implementing Targeted Fixes
Technical teams rolled out configuration changes to Cloudflare Access and WARP, which successfully reduced error rates and restored normal traffic flow. Services were gradually re-enabled once systems were verified stable.
4. Ongoing Root-Cause Investigation
While the unusual-traffic spike remains the confirmed trigger, Cloudflare stated that a full internal analysis is underway to determine the exact source and prevent a recurrence.
By early afternoon UTC, Cloudflare confirmed that systems had returned to pre-incident performance levels, and affected services worldwide began functioning normally.
Why This Matters
Tuesday’s outage underscores a critical truth about modern internet architecture: a handful of infrastructure companies underpin a massive portion of global online activity. When one of them experiences instability — even briefly — the ripple effects are immediate and worldwide.
For businesses, schools, governments, and content creators, the incident is a reminder of the importance of:
- Redundant DNS/CDN providers (a minimal failover sketch follows this list)
- Disaster-recovery and failover plans
- Clear communication protocols during service outages
- Vendor-dependency risk assessments
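On the first point, the sketch below shows the shape of application-level failover across two independently hosted endpoints; the hostnames are hypothetical, and the code is Python with the requests library. Production setups usually push this down to health-checked DNS records or a traffic manager rather than application code, but the control flow is the same: probe, detect errors or timeouts, shift traffic.

```python
import requests

# Both hostnames are hypothetical stand-ins for the same origin
# published through two independent CDN providers.
ENDPOINTS = [
    "https://cdn-a.example.com/status",   # primary provider
    "https://cdn-b.example.net/status",   # independent fallback
]

def fetch_with_failover(urls=ENDPOINTS, timeout=5):
    """Return the first healthy response, falling back across providers."""
    last_error = None
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.ok:
                return resp  # healthy provider found, stop probing
            last_error = RuntimeError(f"{url} returned {resp.status_code}")
        except requests.RequestException as exc:
            last_error = exc  # timeout or connection failure: try the next
    raise RuntimeError("all providers failing") from last_error
```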
Cloudflare emphasized that no evidence currently points to a cyberattack, though the nature of the traffic spike remains under investigation.
Looking Ahead
As Cloudflare completes its post-incident review, the company is expected to provide a detailed breakdown of the technical root cause and outline steps to harden its infrastructure. Given Cloudflare’s central role in global internet stability, analysts say the findings will be watched closely by governments, cybersecurity professionals, and enterprise clients.
For now, services are restored — but the outage serves as a powerful reminder of how interconnected and vulnerable the global web can be.
Cloudflare Outage Analysis: Systemic Failure in Edge Challenge Mechanism Halts Global Traffic
SAN FRANCISCO, CA — A widespread disruption across major internet services, including AI platform ChatGPT and social media giant X (formerly Twitter), has drawn critical attention to the stability of core internet infrastructure. The cause traces back to a major service degradation at Cloudflare, the dominant content delivery network (CDN) and DDoS mitigation provider. Users attempting to access affected sites were met with an opaque, yet telling, error message: “Please unblock challenges.cloudflare.com to proceed.”
This incident was not a simple server crash but a systemic failure within the crucial Web Application Firewall (WAF) and bot management pipeline, resulting in a cascade of HTTP 5xx errors that effectively severed client-server connections for legitimate users.
The Mechanism of Failure: challenges.cloudflare.com
The error message observed globally points directly to a malfunction in Cloudflare’s automated challenge system. The subdomain challenges.cloudflare.com is central to the company’s security and bot defense strategy, acting as an intermediate validation step for traffic suspected of being malicious (bots, scrapers, or DDoS attacks).
This validation typically involves:
- Browser Integrity Check (BIC): A non-invasive test ensuring the client browser is legitimate.
- Managed Challenge: A dynamic, non-interactive proof-of-work check.
- Interactive Challenge (CAPTCHA): A final, user-facing verification mechanism.
In a healthy system, a user passing through Cloudflare’s edge network is either immediately granted access or temporarily routed to this challenge page for verification.
During the outage, however, the challenge logic itself appears to have failed at the edge of Cloudflare’s network. When the system was invoked (likely due to high load or a misconfiguration), the expected security response—a functional challenge page—returned an internal server error (a 500-level status code). This meant:
- The Request Loop: Legitimate traffic was correctly flagged for a challenge, but the server hosting the challenge mechanism failed to process or render the page correctly.
- The HTTP 500 Cascade: Instead of displaying the challenge, the Cloudflare edge server returned a “500 Internal Server Error” to the client, sometimes obfuscated by the text prompt to “unblock” the challenges domain. This effectively created a dead end, blocking authenticated users from proceeding to the origin server (e.g., OpenAI’s backend for ChatGPT).
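For operators trying to triage such an event, the practical question is whether a given 5xx came from the origin or from Cloudflare’s edge. The heuristic below is a rough sketch, not an official diagnostic: Cloudflare documents a cf-mitigated: challenge response header on challenge pages, so its absence on a 5xx that carries a Server: cloudflare header points at the edge pipeline itself.

```python
import requests

def classify_cloudflare_response(url):
    """Heuristically distinguish a served challenge from an edge failure.

    Cloudflare documents a `cf-mitigated: challenge` header on challenge
    responses; a 5xx with a `Server: cloudflare` header and no such
    marker suggests the edge itself failed, as in this incident.
    This is a monitoring heuristic, not an official diagnostic API.
    """
    resp = requests.get(url, timeout=10)
    server = resp.headers.get("Server", "").lower()
    mitigated = resp.headers.get("cf-mitigated", "").lower()

    if mitigated == "challenge":
        return "challenge page served (security layer working)"
    if resp.status_code >= 500 and "cloudflare" in server:
        return "5xx from the Cloudflare edge (challenge/WAF pipeline failing)"
    if resp.status_code >= 500:
        return "5xx from the origin server"
    return f"normal response ({resp.status_code})"

if __name__ == "__main__":
    # Placeholder URL for illustration.
    print(classify_cloudflare_response("https://example.com"))
```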
Technical Impact on Global Services
The fallout underscored the concentration risk inherent in modern web architecture. As a reverse proxy, Cloudflare sits between the end-user and the origin server for a vast percentage of the internet.
For services like ChatGPT, which rely heavily on fast, secure, and authenticated API calls and constant data exchange, the WAF failure introduced severe latency and outright connection refusal. A failure in Cloudflare’s global network meant that fundamental features such as DNS resolution, TLS termination, and request routing were compromised, leading to:
- API Timeouts: Applications utilizing Cloudflare’s API for configuration or deployment experienced critical failures (a client-side guard is sketched after this list).
- Widespread Service Degradation: The systemic 5xx errors at the L7 (Application Layer) caused services to appear “down,” even if the underlying compute resources and databases of the origin servers remained fully operational.
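Applications that depend on a single proxy layer often wrap outbound calls in a circuit breaker so that a failing dependency fails fast instead of stalling every request. The sketch below is generic Python; the threshold and cooldown values are illustrative assumptions, not Cloudflare guidance.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a failing dependency.

    After `threshold` consecutive failures the circuit opens and calls
    fail fast for `cooldown` seconds, sparing both the caller and the
    degraded service. Values here are illustrative, not tuned.
    """

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: dependency marked unhealthy")
            self.opened_at = None  # cooldown elapsed, allow one retry
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the count
        return result
```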
Cloudflare’s official status updates confirmed they were investigating an issue impacting “multiple customers: Widespread 500 errors, Cloudflare Dashboard and API also failing.” While the exact trigger was later traced to an internal platform issue (in some historical Cloudflare incidents, this has been a BGP routing error or a misconfigured firewall rule pushed globally), the user-facing symptom highlighted the fragility of relying on a single third-party for security and content delivery on a global scale.
Mitigation and the Single Point of Failure
While Cloudflare teams worked to roll back configuration changes and isolate the fault domain, the incident renews discussion on the “single point of failure” doctrine. When a critical intermediary layer—responsible for security, routing, and caching—experiences a core logic failure, the entire digital economy resting on it is exposed.
Engineers and site reliability teams are now expected to scrutinize multi-CDN and multi-cloud strategies more closely, ensuring that critical application traffic paths are not entirely dependent on a single third party’s edge infrastructure, a practice that is often challenging due to cost and operational complexity. The “unblock challenges” error serves as a stark reminder of the technical chasm between a user’s browser and the complex, interconnected security apparatus that underpins the modern web.
Manufacturing Software at Risk from CVE-2025-5086 Exploit
Dassault Systèmes patches severe vulnerability in Apriso manufacturing software that could let attackers bypass authentication and compromise factories worldwide.
A newly disclosed flaw, tracked as CVE-2025-5086, poses a major security risk to manufacturers using Dassault Systèmes’ DELMIA Apriso platform. The bug could allow unauthenticated attackers to seize control of production environments, prompting urgent patching from the vendor and warnings from cybersecurity experts.
A critical vulnerability in DELMIA Apriso, a manufacturing execution system used by global industries, could let hackers bypass authentication and gain full access to sensitive production data, according to a security advisory published this week.
Dassault Systèmes confirmed the flaw, designated CVE-2025-5086, affects multiple versions of Apriso and scored 9.8 on the CVSS scale, placing it in the “critical” category. Researchers said the issue stems from improper authentication handling that allows remote attackers to execute privileged actions without valid credentials.
The company has released security updates and urged immediate deployment, warning that unpatched systems could become prime targets for industrial espionage or sabotage. The flaw is particularly alarming because Apriso integrates with enterprise resource planning (ERP), supply chain, and industrial control systems, giving attackers a potential foothold in critical infrastructure.
- “This is the kind of vulnerability that keeps CISOs awake at night,” said Maria Lopez, industrial cybersecurity analyst at Kaspersky ICS CERT. “If exploited, it could shut down production lines or manipulate output, creating enormous financial and safety risks.”
- “Manufacturing software has historically lagged behind IT security practices, making these flaws highly attractive to threat actors,” noted James Patel, senior researcher at SANS Institute.
- El Mostafa Ouchen, cybersecurity author, told MAG212News: “This case shows why manufacturing execution systems must adopt zero-trust principles. Attackers know that compromising production software can ripple across supply chains and economies.”
- “We are actively working with customers and partners to ensure systems are secured,” Dassault Systèmes said in a statement. “Patches and mitigations have been released, and we strongly recommend immediate updates.”
Technical Analysis
The flaw resides in Apriso’s authentication module. Improper input validation in login requests allows attackers to bypass session verification, enabling arbitrary code execution with administrative privileges. Successful exploitation could:
- Access or modify production databases.
- Inject malicious instructions into factory automation workflows.
- Escalate attacks into connected ERP and PLM systems.
Mitigations include applying vendor patches, segmenting Apriso servers from external networks, enforcing MFA on supporting infrastructure, and monitoring for abnormal authentication attempts.
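On the last point, the sketch below shows the shape of that monitoring heuristic in Python, using an invented log format (Apriso’s real audit logs will differ): count failed authentication attempts per source and flag bursts. A real deployment would feed windowed counts into a SIEM rather than scan a flat file.

```python
from collections import Counter

# Hypothetical log format for illustration: "<timestamp> <ip> <event>".
SAMPLE_LOG = """\
2025-09-12T10:01:02Z 203.0.113.7 LOGIN_FAILED
2025-09-12T10:01:03Z 203.0.113.7 LOGIN_FAILED
2025-09-12T10:01:04Z 203.0.113.7 LOGIN_FAILED
2025-09-12T10:02:10Z 198.51.100.4 LOGIN_OK
"""

def flag_suspicious_sources(log_text, threshold=3):
    """Flag source IPs whose failed-login count meets the threshold."""
    failures = Counter(
        line.split()[1]                      # second field is the source IP
        for line in log_text.splitlines()
        if line.endswith("LOGIN_FAILED")     # count only failed attempts
    )
    return [ip for ip, count in failures.items() if count >= threshold]

if __name__ == "__main__":
    print(flag_suspicious_sources(SAMPLE_LOG))  # -> ['203.0.113.7']
```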
Impact & Response
Organizations in automotive, aerospace, and logistics sectors are particularly exposed. Exploited at scale, the vulnerability could cause production delays, supply chain disruptions, and theft of intellectual property. Security teams are advised to scan their environments, apply updates, and coordinate incident response planning.
Background
This disclosure follows a string of high-severity flaws in industrial and operational technology (OT) software, including vulnerabilities in Siemens’ TIA Portal and Rockwell Automation controllers. Experts warn that adversaries—ranging from ransomware gangs to state-sponsored groups—are increasingly focusing on OT targets due to their high-value disruption potential.
Conclusion
The CVE-2025-5086 flaw underscores the urgency for manufacturers to prioritize cybersecurity in factory software. As digital transformation accelerates, securing industrial platforms like Apriso will be critical to ensuring business continuity and protecting global supply chains.