Spiceworks Inc https://www.spiceworks.com/ Everything IT - Community, Insights, Research and Tools Fri, 13 Feb 2026 20:47:41 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2024/07/02084556/spiceworks-logo-icon.jpg Spiceworks Inc https://www.spiceworks.com/ 32 32 Building immutable backups without breaking your budget https://www.spiceworks.com/security/building-immutable-backups-without-breaking-your-budget/ Fri, 13 Feb 2026 20:47:41 +0000 https://www.spiceworks.com/?p=3181571 The backups were right there on the server, and the ransomware had encrypted them along with everything else. The disaster recovery plan that looked solid on paper had one critical flaw: everything lived on the same network, protected by the same credentials the attackers had already harvested. Once they had the keys, they locked every […]

The post Building immutable backups without breaking your budget appeared first on Spiceworks Inc.

]]>

The backups were right there on the server, and the ransomware had encrypted them along with everything else. The disaster recovery plan that looked solid on paper had one critical flaw: everything lived on the same network, protected by the same credentials the attackers had already harvested. Once they had the keys, they locked every door on their way out.

Immutable storage exists to prevent exactly this scenario. These are backups that cannot be altered, encrypted, or deleted for a defined retention period, not even by someone with admin credentials. If attackers can’t encrypt your backups, you have a path to recovery that doesn’t involve paying the ransom.

Until recently, enterprise-grade immutable solutions called for enterprise budgets. If you’ve priced dedicated storage arrays, specialized appliances, or per-terabyte licensing that scales with your data growth, you’ve probably concluded that immutability wasn’t for you.

That’s less true than it used to be. Cloud providers now offer object-lock features at commodity prices, backup software can create hardened repositories on standard Linux boxes, and even mid-range NAS devices support immutable snapshots. Now it’s a matter of figuring out which data actually needs this level of protection, and which approaches work when you don’t have a dedicated storage team.

Figuring out what actually needs immutable protection

Although it would be ideal to just implement immutable backups for everything in your environment, it’s probably not practical. The cost and complexity add up fast if you treat immutability as a blanket requirement rather than a risk management decision. What you’re really protecting is the data that makes recovering everything else possible. A few categories stand out.

Recovery dependencies come first. Active Directory, DNS, DHCP, and certificate stores are often prerequisites for restoring anything else, and without them, even perfect backups of other systems won’t help you. If you’re running hybrid with Entra ID, that synchronization relationship needs protection too.

Financial and operational essentials belong on the list as well. Anything with legal retention requirements, the databases and application data your business literally cannot operate without, and bare-metal recovery images all deserve immutable protection. When you’re dealing with ransomware, you need to be able to rebuild servers from scratch instead of restoring onto potentially compromised systems.

Standard backups handle the rest. Old email archives, ancient project files, data that’s easily recreated, and systems that can tolerate longer recovery times can stay where they are. Save your immutable copies for the stuff that makes recovering everything else possible.

With those priorities clear, you’ll need somewhere to put the backups themselves.

Cloud object storage with built-in immutability

If you’re already sending backups to the cloud, enabling immutability may be a configuration change away. The Spiceworks State of IT 2026 report found that 61% of businesses currently use cloud-based backup and disaster recovery, which means many of them already have access to these features without realizing it.

AWS S3 Object Lock, Azure Immutable Blob Storage, Backblaze B2, and Wasabi all offer object-lock capabilities with pay-as-you-go pricing. Once you enable object lock on a storage bucket, objects written there cannot be deleted or modified until a retention period you specify expires. An attacker who compromises your backup credentials still can’t touch the data.

Governance mode vs. compliance mode

Cloud storage services typically offer two modes. Governance mode allows designated administrators to override the lock if necessary, which provides flexibility but leaves a potential avenue for tampering. Compliance mode is stricter: no one can delete the data before retention expires, including you. That satisfies regulatory requirements and eliminates any possibility of tampering, but it also means you’re locked into storing that data for the full retention period, even if your needs change.

Watch the cost math

You avoid hardware investment entirely with this approach and pay only for the storage you use, though longer retention means more storage to pay for. Egress fees can also quickly add up when you’re recovering large volumes of data, so factor that into your disaster recovery planning.

Veeam, MSP360, Acronis, and others can write directly to S3-compatible storage with object lock enabled, and Gartner’s 2025 Magic Quadrant for Backup and Data Protection Platforms lists immutable storage integration as a mandatory feature for enterprise backup solutions. If you’re already using one of these tools, adding an immutable cloud target may be more straightforward than you expect. If you’re backing up Microsoft 365, third-party tools can send Exchange, SharePoint, and OneDrive data to immutable targets too.

Immutability capabilities you may already have

Before you start shopping for new solutions, take stock of what you already have—immutability may be closer than you think.

If you already own Veeam licenses, the hardened Linux repository approach pairs commercial backup software with a standard Linux server configured to resist tampering. The backup server connects via limited, single-purpose credentials, and the repository itself refuses delete commands. Someone comfortable with basic Linux administration can set this up without additional licensing costs.

Synology and QNAP both offer immutable snapshot features in their mid-range NAS devices, so enabling them may be a matter of configuration rather than new purchases. The snapshots become read-only for a retention period you define, and even administrator accounts can’t delete them early.

Teams comfortable managing Linux systems on commodity hardware have another option: ZFS and Btrfs file systems both offer read-only snapshot capabilities at zero licensing cost. These snapshots aren’t quite the same as true WORM storage, though. Without careful permission controls, anyone with root access can still delete them. This approach requires more hands-on management and tighter permission controls to be effective, but it works if you have the expertise.

Air-gapping when it matters most

Software-based immutability handles most ransomware scenarios, but physical separation may still play a role for your most critical recovery data.

Offline copies of bare-metal recovery images and AD backups provide a true last resort with zero attack surface when disconnected from your network. This doesn’t require tape libraries or complex rotation schemes. Even encrypted external drives stored offsite and updated monthly offer real protection when you need to rebuild from scratch.

Someone has to physically rotate the media and verify the copies are still readable. Most companies might do this on a monthly schedule, but quarterly may work if your data doesn’t change often. That adds manual effort to an already-busy workload.

One immutable copy removes an attacker’s leverage

Attackers succeed when they can hold all your data hostage—production and backups alike. One immutable copy of your most critical systems takes that option away from them. The tools are more accessible than they used to be, and you’ve probably got some of them already. A little advance planning now will put you in a far more resilient position if a ransomware attack strikes.

The post Building immutable backups without breaking your budget appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/13204527/shutterstock_479387101-600x400.jpg
What happens when AI takes over entry-level IT tasks? https://www.spiceworks.com/it-careers/what-happens-when-ai-takes-over-entry-level-it-tasks/ Thu, 12 Feb 2026 19:17:27 +0000 https://www.spiceworks.com/?p=3181565 It’s no secret that AI is transforming the way we work, but what about how we grow? As AI continues to get smarter and more capable, how is it going to change the way we develop new skills and progress in our career?PwC recently announced that it was rethinking how it trains employeesOpens a new window […]

The post What happens when AI takes over entry-level IT tasks? appeared first on Spiceworks Inc.

]]>

It’s no secret that AI is transforming the way we work, but what about how we grow? As AI continues to get smarter and more capable, how is it going to change the way we develop new skills and progress in our career?

PwC recently announced that it was rethinking how it trains employeesOpens a new window in the AI era. Instead of focusing on titles and traditional career ladders, the firm is focused on growing skills alongside technical change rather than lagging behind. Employees will have continuous learning embedded into everyday work to keep up with the rapid pace of technology.

That shift signals something important. If technology is accelerating, skill development has to accelerate with it. However, for IT professionals in particular, this can cause a bit of a dilemma. If AI replaces the repetitive, entry-level tasks that used to build foundational skills, where are those skills supposed to come from?

What happens if AI takes over entry-level?

In a recent conversation on the Spiceworks Community, we explored what happens to career growth when AI starts absorbing the repetitive tasks entry-level professionals used to rely on?

For decades, the IT career path has started the same: work at the help desk, learn the basics, and troubleshoot the simple stuff. Was it glamorous? No, but it built the foundation that you would grow your career on.

AI has changed all that as it takes over many of the repetitive tasks that entry-level professionals used to do. If AI handles the repetition, where do those foundational reps come from?

AI’s impact on career development

Repetition is how professionals internalize systems. It’s how they begin to understand how infrastructure behaves under normal conditions, which makes problems easier to spot later. When a junior engineer works through the same type of issue multiple times, they’re building a model of how things work. Instead of struggling through the problem, early-career professionals may jump straight to the solution the tool suggests, hindering their growth.

Over time, this could create a subtle shift in career growth. Professionals may advance in title and responsibility without having accumulated the same depth of hands-on exposure that previous generations did.

The modern day career advancement

If repetitive entry-level work becomes obsolete, the traditional IT career ladder starts to look different. At some point, someone still needs to reason through a problem the AI didn’t anticipate. Someone needs to recognize when a recommendation doesn’t quite make sense. Someone needs to understand why a system behaves the way it does, not just that it does.

This is where PwC’s embedded learning approach becomes relevant. If AI is woven into daily workflows, development has to be woven in as well. With AI handling the “grunt” work, we can no longer solely rely on experience to help us grow our skills, it needs to be done intentionally. For IT, this might look like more career rotation programs where junior employees get to work closely with a mentor on more advanced projects. Or it could be as simple as requiring junior employees to explain AI recommendations before implementing them.

The entry-level must evolve

AI is going to continue to get smarter and the boring, repetitive tasks will continue to get automated. However, it’s important that entry-level roles evolve alongside AI rather than get replaced entirely. If the old model was “learn by doing the same thing 100 times,” and AI now does that thing for us, we have to ask: what replaces those 100 reps?

It’s crucial that junior employees still understand the “why” and “how” between what AI is spitting out. Eventually, AI will provide an incorrect solution or not work properly, which will require the human managing it to have the knowledge to make sound judgement.

How is your organization handling upskilling alongside AI? How do you think entry-level roles will evolve? Join the conversation on the Spiceworks Community.

The post What happens when AI takes over entry-level IT tasks? appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/12170458/shutterstock_2102447011-600x400.jpg
Low-code security isn’t vibe coding, but it isn’t foolproof either https://www.spiceworks.com/security/low-code-security-isnt-vibe-coding-but-it-isnt-foolproof-either/ Thu, 12 Feb 2026 13:51:26 +0000 https://www.spiceworks.com/?p=3181563 Low-code platforms occupy awkward middle ground for IT. They’re not the Wild West of vibe coding, where AI generates raw code that nobody understands and no vendor supports. But they’re not traditional vendor software either, where security is entirely someone else’s problem.Power Platform, Salesforce, and ServiceNow ship with real guardrails, so it’s not surprising that […]

The post Low-code security isn’t vibe coding, but it isn’t foolproof either appeared first on Spiceworks Inc.

]]>

Low-code platforms occupy awkward middle ground for IT. They’re not the Wild West of vibe coding, where AI generates raw code that nobody understands and no vendor supports. But they’re not traditional vendor software either, where security is entirely someone else’s problem.

Power Platform, Salesforce, and ServiceNow ship with real guardrails, so it’s not surprising that IT feels more comfortable when citizen developers build on sanctioned platforms like them. The trouble is that comfort can obscure a different kind of risk—the configuration choices individual builders make may happen outside IT’s direct line of sight.

Gartner forecasts that by 2026, 80% of low-code usersOpens a new window won’t be in IT. That’s a lot of people making configuration choices on platforms IT selected but can’t realistically monitor app-by-app. The platform handles the hard security problems, but it can’t stop someone from connecting to a database with their personal credentials or granting broader permissions than the app actually needs.

What makes low-code different from vibe coding

Low-code and no-code platforms give you visual development environments with drag-and-drop interfaces, pre-built components, and integrations that would take months to build from scratch. IT typically selects these platforms, configures them to meet organizational security requirements, and oversees them from there. When something breaks, there’s a vendor to call and documentation to reference.

AI coding tools generate raw code from natural language descriptions, and the person who built the app often can’t explain how it works because they described what they wanted and an AI made it happen. There’s no vendor security team patching vulnerabilities, no built-in governance, and nobody on call when it breaks at 2 a.m.

Low-code platforms abstract security into the platform layer, so authentication, access controls, and audit logging come pre-configured rather than hand-coded. Vibe-coded outputs need all the same security scrutiny as any custom development, but the people building them rarely have the expertise to provide it. If Power Platform apps worry you less than whatever ChatGPT just generated for the sales team, your instincts are solid—they just need to account for the whole picture.

What low-code platform guardrails actually protect

Enterprise low-code platforms do reduce risk compared to letting users write raw code or vibe-code solutions. Authentication infrastructure gets handled at the platform level, integrating with your existing identity provider so you’re not trusting each citizen developer to figure out OAuth flows on their own. Role-based access controls become configuration clicks rather than custom code that might have holes in it, and audit logging happens whether anyone remembers to implement it or not.

Microsoft, for example, publishes detailed documentation on how Power Platform addresses the top 10 web application risks from the Open Web Application Security Project (OWASP)Opens a new window . That includes clickjacking prevention, Content Security Policy (CSP) support, and “Default Deny” design principles. Salesforce offers Security Health Check to continuously monitor configurations against baseline standards, plus Shield for encryption and event monitoring if you’re in a regulated industry. These are real security investments that most citizen developers couldn’t replicate if they tried.

Where low-code platform guardrails fall short

Say your marketing director builds a customer intake form in Power Apps, connects it to Dynamics using her own credentials because that’s the fastest way to get it working, and deploys it to the team without mentioning it to anyone in IT. Now every user who submits a form is implicitly operating with her access level, and nobody can tell the difference between legitimate usage and something suspicious. The platform didn’t stop her because it can’t enforce dedicated service accounts if the builder never configured them.

OWASP has cataloged these gaps in the Citizen Development Top 10Opens a new window . Account impersonation, like the scenario above, is near the top of the list. Excessive permissions come next.

Developers grant broad access during development because they just want the thing to work, but they may not always tighten controls before deployment. Data exposure happens because platforms handle authentication but they might not automatically prevent builders from exposing sensitive information through misconfigured sharing settings or poorly designed data flows.

Research published in June 2025 revealed five Common Vulnerabilities and Exposures (CVEs)Opens a new window in Salesforce Industry Clouds related to FlexCards and Data Mappers, components where seemingly minor configuration decisions bypassed field-level security entirely. Salesforce has since addressed these issues, but the underlying pattern remains instructive. The vulnerabilities weren’t exotic attack vectors requiring nation-state resources. They were the kind of mistakes any citizen developer could make without realizing the potential consequences.

Static and dynamic application security testing designed for conventional code usually can’t inspect low-code outputs at all. These platforms are easy to use because they hide complexity, and that complexity stays hidden from your security tools too. You trust the platform because you chose the platform, but that trust doesn’t automatically extend to every app built on it.

When low-code outputs still need review

The Spiceworks 2026 State of IT report confirms what you’re probably already seeing at your company. People are adopting AI tools faster than governance can keep up. That gap between capability and oversight applies to low-code as well as vibe coding. Platform guardrails don’t substitute for governance—they just make it easier to implement.

You can’t review everything, though, especially if you’re a one-person shop or a small team already stretched thin. A practical filter helps. If the app connects to a system of record like your enterprise resource planning (ERP), CRM, or financial systems, it deserves scrutiny. Apps handling customer or employee data that would trigger a breach notification warrant a closer look, and external integrations where a credential leak would be embarrassing or expensive shouldn’t go live without review. Everything else can probably wait until you have more bandwidth.

Ask what connects to what, who can see the data flowing through, and whether access controls make sense for all user types rather than just the person who built the app. Then, verify that any API connections follow credential management best practices. If you’re on Power Platform, the data loss prevention (DLP) policies in the admin center can help you see which connectors are in use across your environment.

While you’re at it, make sure someone other than the original builder can explain what the app does, because that person will eventually go on vacation or leave the company.

Governance you can manage

Low-code platforms have earned IT’s trust by building security into the foundation rather than bolting it on as an afterthought. That doesn’t mean you can stop thinking about security, but it does mean the hard problems are largely handled for you. What’s left is the kind of governance you can actually manage without hiring a dedicated team or slowing innovation to a crawl. Focus on the apps that matter most, build lightweight review habits, and let the platform do what it was designed to do.

The post Low-code security isn’t vibe coding, but it isn’t foolproof either appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/12135042/shutterstock_2623166799-714x400.jpg
The data visibility crisis IT teams aren’t talking about https://www.spiceworks.com/security/the-data-visibility-crisis-it-teams-arent-talking-about/ Wed, 11 Feb 2026 21:41:10 +0000 https://www.spiceworks.com/?p=3181561 In May 2025, Ireland’s Data Protection Commission hit TikTok with a €530 million fine after discovering that the company had been storing European user data on Chinese servers. TikTok didn’t even know this was happening until February 2025, three years into the regulatory inquiry. If a company with thousands of engineers can lose track of […]

The post The data visibility crisis IT teams aren’t talking about appeared first on Spiceworks Inc.

]]>

In May 2025, Ireland’s Data Protection Commission hit TikTok with a €530 million fine after discovering that the company had been storing European user data on Chinese servers. TikTok didn’t even know this was happening until February 2025, three years into the regulatory inquiry. If a company with thousands of engineers can lose track of where its data lives, you can imagine how this plays out in environments with smaller IT teams and dozens of SaaS subscriptions.

According to a December 2025 Veeam survey, nearly 60% of IT leadersOpens a new window report reduced visibility into data locations as multi-cloud and SaaS environments expand. This isn’t a failure of diligence so much as what happens when environments get optimized for productivity and nobody’s minding the data map.

Data you don’t know about is data you can’t defend—and you definitely can’t produce it for auditors when they come asking. The good news is that you don’t need enterprise-grade tools to start getting a handle on it.

Every new integration widens the IT visibility gap

Data escapes your view in more ways than you can count. Cloud apps sync to local devices, users export reports to personal drives, backups create shadow copies that nobody inventoried, and AI tools ingest data that never went through your approval process. And once it’s out there, it multiplies.

For example, that marketing automation platform you approved last year might now connect to customer data you didn’t anticipate. Your accounting software may integrate with a document signing service that indefinitely stores executed contracts, quite possibly in a region you’ve never thought to ask about.

It’s not unusual for companies to discover entire file shares during routine access reviews. Most IT teams find out about these shadow data flows the hard way, whether that’s during an incident, an audit, or when a departing employee’s access review turns up surprises. By that point, whatever map you thought you had is already decidedly out of date.

Your existing licenses include data discovery tools

If you’re on Microsoft 365 E3 or Business Premium, you already have basic DLP capabilities for Exchange, SharePoint, and OneDrive that can identify and flag sensitive content like credit card numbers and Social Security numbers without additional licensing. Once you enable manual sensitivity labeling, users can classify data as they create it—and the Purview compliance portal shows you where that labeled content ends up and who’s accessing it.

If you’re a Google Workspace shop on Business Standard or higher, you may not realize you have a similar option. Data Protection Insights reports automatically scan Drive and Gmail for sensitive content without any DLP rules to configure. You can find quarterly reports in Admin Console > Security > Data Protection that show which sensitive files are shared externally.

When you export the list of connected applications from Okta, Azure AD, or Google Workspace, you might be surprised by what turns up. Apps that aren’t in your SSO represent visibility gaps worth investigating. If you don’t have centralized identity management yet, a browser extension audit across a sample of machines will likely reveal OAuth connections you never approved.

Of course, some of this work can’t be automated, but that might not be a deal-breaker. Talking to the people who use the data often surfaces things no scan would catch. Rather than asking department heads where their data lives, which tends to produce shrugs, try asking them specifically where they store customer contact information, signed contracts, or financial reports. When what people say doesn’t line up with what your systems show, you’ve found where to focus first.

Purpose-built visibility tools for regulated environments

If you’re in a regulated industry or have outgrown manual tracking, you may need more specialized tools. Microsoft Purview Premium adds automatic classification and machine learning-based detection, while third-party options like Varonis and Spirion scan across cloud and on-prem environments to build data maps that continuously update. CASBs like Netskope or Microsoft Defender for Cloud Apps can surface shadow SaaS and monitor what’s flowing through approved applications.

You’ll need a decent budget and ongoing tuning to get the most from these tools. If you’ve done the manual work first, though, you’ll have a stronger case to make. Imagine walking into a budget meeting and being able to say you found 340 files containing customer SSNs in places nobody knew about, using tools that were sitting in your licensing agreement the whole time. That’s the kind of concrete finding that opens doors.

Whether or not you eventually get the green light to buy an enterprise solution, the manual work pays for itself. You’ll either make a compelling case for better tooling, or discover that what you already have is enough.

Prioritize the data regulators will ask about first

Perfect visibility probably isn’t achievable, and that’s okay. What matters is useful visibility, starting with data that carries real consequences if you lose track of it. If you’re in a regulated industry, or if you handle European customer data, knowing where sensitive information lives isn’t optional—but it’s also where visibility efforts pay off most clearly.

Map these data types first:

  • Personally identifiable information (PII): Customer names, addresses, contact information
  • Protected health information (PHI): (health records, insurance information)
  • Payment card information (formally payment card industry data security standard, or PCI DSS): payment card data, transaction records
  • Access credentials: passwords, API keys, certificates
  • Compliance-covered data: anything subject to GDPR, HIPAA, SOX, or industry regulations

Once you find sensitive data in places it shouldn’t be, you’ll need to decide what to do about it—sometimes moving files, sometimes deleting copies, sometimes having uncomfortable conversations with business units. Whatever you decide, you’ll want to document what you found and what you did about it in case you need to demonstrate due diligence later.

Build IT visibility into onboarding and offboarding

Last quarter’s inventory is likely already stale since most IT environments change too fast for one-time audits. So, instead of treating visibility as a standalone project, build it into processes you’re already running.

Onboarding and offboarding are natural places to start. When someone joins the company, document what they’ll have access to. When they leave, trace where their data handoffs go. These transitions are often when you discover the spreadsheets and local files that never made it into official systems.

How do you know when you’ve done enough? If you can confidently answer three questions for your top five data categories, you’re ahead of most shops your size:

  • Where is our most sensitive data located?
  • Who has access to it?
  • How would we find it if we needed to produce it under time pressure?

Better data visibility is achievable, and it pays off

TikTok’s €530 million problem started with a visibility gap nobody noticed for three years. With a little preparation, you can avoid that path. Pick one data category—customer PII is usually a good starting point—and spend a few hours this month tracing where it actually lives.

Each category you map will give you more confidence about where your data is and who can reach it. That clarity makes everything else easier, from incident response to compliance audits to the next chat with the C-suite.

The post The data visibility crisis IT teams aren’t talking about appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/11214015/shutterstock_2633764003-711x400.jpg
AI is shrinking your incident response window https://www.spiceworks.com/security/ai-is-shrinking-your-incident-response-window/ Tue, 10 Feb 2026 20:10:04 +0000 https://www.spiceworks.com/?p=3181546 Your incident response plan probably assumes you’ll have a day or two to figure out what’s happening after a security alert fires. Something looks off, you investigate, you confirm the threat is real, you contain it, and then you loop in the right people. It’s not a leisurely process, but at least there’s some time […]

The post AI is shrinking your incident response window appeared first on Spiceworks Inc.

]]>

Your incident response plan probably assumes you’ll have a day or two to figure out what’s happening after a security alert fires. Something looks off, you investigate, you confirm the threat is real, you contain it, and then you loop in the right people. It’s not a leisurely process, but at least there’s some time to think—or there was.

In nearly one in five incidents Palo Alto Networks’ Unit 42 team responded to in 2025, attackers exfiltrated data within an hourOpens a new window of initial compromise. A quarter of cases saw exfiltration in under five hours. That’s three times faster than in 2021. The median is still about two days, but the fast end of the curve is getting faster.

If your IR plan exists mostly as a mental note and a hope that nothing serious happens on the weekend, that’s more common than anyone likes to admit. But the gap between “we should formalize this” and “we need to formalize this now” is now a lot smaller than it used to be.

What “faster” looks like on a regular workday

When security researchers talk about AI-accelerated attacks, the abstractions can feel disconnected from your actual workday. So here’s what the acceleration can actually look like at each stage of an intrusion.

Reconnaissance that once required weeks of manual research now happens while attackers sleep. AI tools scrape LinkedIn for org charts, crawl company websites for technology clues, and automatically map network topology. By the time an attacker launches the intrusion, they already know which credentials to pursue and which systems matter most.

Once someone gains initial access through a phishing email or compromised VPN, the acceleration continues. Lateral movement and privilege escalation that used to involve trial and error can now be automated. AI identifies paths through your network, tests permissions, and escalates access while your first alert is still sitting in a queue waiting to be triaged.

According to CrowdStrike’s 2025 State of Ransomware Survey, nearly half of organizationsOpens a new window fear they can’t detect or respond fast enough to match AI-driven attacks. Even among those who felt “very well prepared” beforehand, only 22% recovered within 24 hours.

If you’re a one-person IT shop, or part of a small team juggling a dozen other priorities, the math is uncomfortable. You’re not watching logs when automated exfiltration starts just before dawn. You’re not seeing the first alert until you’ve cleared your morning email. By the time you’ve grabbed coffee and opened your monitoring dashboard, a compressed attack may already be over.

Identifying the alerts that can’t wait until morning

Not long ago, the approach to detection was to cast a wide net, collect everything, and investigate anomalies when you have time. That doesn’t hold up when attackers can finish exfiltrating data before you finish your first triage cycle. You need to know right away when specific high-confidence indicators fire, even if that means accepting less visibility elsewhere.

In most environments, that means watching for things like a service account suddenly authenticating interactively to your file server at midnight, large transfers heading to external destinations when nobody should be moving data, or privileged account activity outside normal patterns. These anomalies deserve immediate attention, not next-day review.

If nobody is watching during the wee hours, those alerts will sit until sunrise. The Spiceworks State of IT Report 2026 shows managed security services leading all categories in spending growth, and there’s a reason for that. Managed detection and response (MDR) providers can watch your environment during hours you can’t staff.

Basic MDR coverage might run $3,000 to $5,000 a month for a small environment. That’s not nothing, but compare it to the cost of hiring even one person for overnight monitoring. When the alternative is hoping you notice an attack during business hours, it starts to look like a reasonable trade-off.

If MDR isn’t in the budget, consider configuring your existing tools to alert on those high-confidence indicators and make sure someone is checking them every day. But knowing about an alert faster only helps if you can act on it faster, too.

Containment decisions that can’t wait for a meeting

Most IR plans assume a sequence of steps with decision points along the way. Detect something suspicious, investigate to confirm, decide on containment, execute, and communicate. That approach made sense when attacks took days, but it no longer works when they take hours or minutes.

Fortunately, you don’t have to throw out your IR plan. You just need to pre-decide as much of it as possible.

  • Identify your containment triggers. Figure out which scenarios justify immediate containment, even if it means disrupting operations.
  • Establish overnight authority. Decide who can isolate a system without calling a meeting first.
  • Clarify your communication defaults. What should automatically happen versus what waits for someone to make a judgment call?

Once you’ve got that figured out, run each category of alert through the 3 AM Saturday test. If this alert fires overnight on a holiday weekend, what automatically happens, and what waits for someone to wake up? Anything involving bulk data access, credential anomalies on privileged accounts, or signs of lateral movement probably shouldn’t wait. You might unnecessarily take a system offline, but with compressed attack timelines, the cost of a false positive is usually lower than the cost of hesitation.

If your endpoint detection and response (EDR) solution flags potential ransomware behavior, does that endpoint get automatically isolated, or does someone have to approve it first? With compressed timelines, waiting for approval might be the difference between isolating one machine and losing fifty more.

The powers that be might not be comfortable with pre-authorizing automatic containment, granted. In that case, walk them through what the alternative actually looks like. You get the call in the middle of the night, spend 20 minutes figuring out what’s happening, and by then the attack has spread to systems that were fine when the alert first fired. Your higher-ups may still decide manual approval is worth the risk, and that’s their call to make, but they should understand the risks involved with that decision.

Evaluating security tools through a speed lens

When you’re evaluating security tools against compressed attack timelines, you don’t simply need to know whether they add capability. You also need to understand if they buy you time.

For example, that new firewall you’re considering might not speed up your response, and a SIEM you don’t have the bandwidth to tune could actually slow you down with noise. The tools that matter here are the ones that reduce time-to-detection, enable faster containment, or keep working when you’re not actively watching.

Knowing what to deprioritize is equally important. If you have an on-prem log aggregator that needs constant feeding to stay useful, or elaborate alerting rules that generate so many false positives you’ve stopped checking them, they’re probably not helping. They’re just creating clutter for you to manage.

Attacks are accelerating, but the fundamentals are largely the same

AI-powered attacks still exploit the same old weaknesses as before, but you might not have time to notice—much less respond—before one hits your company. Your IR plan doesn’t need a complete overhaul, but you do need to make a few decisions in advance and figure out how to act on critical alerts when you’re not watching. These adjustments may not stop every fast-moving attack, but they will help you respond much more rapidly than you otherwise could. That might make all the difference when the time comes.

The post AI is shrinking your incident response window appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/10200825/shutterstock_2639162341-600x400.jpg
Why biometric authentication isn’t a standalone solution https://www.spiceworks.com/security/why-biometric-authentication-isnt-a-standalone-solution/ Tue, 10 Feb 2026 19:52:48 +0000 https://www.spiceworks.com/?p=3181544 As cyberattacks get faster, more automated, and more convincing, security measures have to go beyond just protecting accounts, enforcing strong passwords, and adding MFA. Additionally, users are dealing with constant authentication prompts, rotating passwords, and app-specific logins that are causing fatigue which is only going to make attacks even easier.This is where biometric authentication can […]

The post Why biometric authentication isn’t a standalone solution appeared first on Spiceworks Inc.

]]>

As cyberattacks get faster, more automated, and more convincing, security measures have to go beyond just protecting accounts, enforcing strong passwords, and adding MFA. Additionally, users are dealing with constant authentication prompts, rotating passwords, and app-specific logins that are causing fatigue which is only going to make attacks even easier.

This is where biometric authentication can come in handy.

From the face ID in our phone to the fingerprint readers built into laptops and office doors, biometric authentication is already a part of how many of us access devices, apps, and spaces every day. When biometrics move beyond personal devices and into workplaces, customer platforms, and public spaces, there’s greater hesitation. The stakes are higher, the privacy questions get louder, and the margin for error shrinks. As organizations look for better ways to secure identity without burning out users, the real question isn’t whether biometric authentication has a place, but how and when it should be used.

The benefits of biometric authentication

There’s a reason biometrics keep gaining traction. When they work well, they’re convenient. Users don’t have to remember long passwords or dig for tokens. From an IT perspective, biometrics can help reduce some common risks associated with passwords, tokens, and MFA. Biometrics also shift authentication away from something you know to something you are, which raises the bar for attackers who rely on stolen credentials.

The common issues with biometric authentication

Biometric systems still fail in very human ways. Fingerprint readers don’t always work with cold hands, dirt, or moisture. Facial recognition can struggle with lighting, angles, or changes in appearance. Voice recognition feels extra risky as AI makes it easy to replicate.

Plus, let’s not forget the privacy concerns that come with using biometric data. Some users are fine unlocking a phone with their face but feel very differently about facial scanning at work or in public spaces. That discomfort can slow adoption, increase resistance, and create additional support challenges for IT teams.

How AI is changing biometric authentication

AI is playing a massive role in how biometric authentication is changing. Machine learning helps compensate for environmental issues like lighting and angles, reducing false rejections and improving accuracy. AI also enables liveness detection and behavior analysis, helping systems determine whether a biometric input is coming from a real person rather than a static image or recording.

AI is a double edged sword for biometric authentication. While it can improve accuracy, it also makes impersonation and deepfake attacks more accessible. That’s why most security teams don’t view biometrics as a standalone solution.

What IT pros are saying

Recent conversations in the Spiceworks Community show that many IT pros see a need for biometric security. They see biometrics as useful, especially fingerprints and device-bound facial recognition, but remain cautious about relying on any single biometric method.

Many IT pros see privacy as the biggest concern. Many are worried about the long-term risk if biometric data is ever compromised. Others point out that biometrics still fail under everyday conditions and can create new support headaches when users get locked out. At the same time, there’s general agreement that layered approaches work better. Combining biometrics with PINs, passwords, or device trust helps offset weaknesses without giving up convenience entirely.

Using biometrics properly

Biometric authentication isn’t the cure all for security issues, but it can be a strong addition to a modern identity strategy when used correctly. In addition to the layered approach, transparency is also key. Users are way more likely to accept biometrics when they understand how their data is collected, stored, and protected.

Is your organization using biometrics to improve security measures? Let us know on the Spiceworks Community.

The post Why biometric authentication isn’t a standalone solution appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/10195101/shutterstock_2495241809-600x400.jpg
When meeting room tech becomes an IT problem https://www.spiceworks.com/it-networking/when-meeting-room-tech-becomes-an-it-problem/ Mon, 09 Feb 2026 16:07:22 +0000 https://www.spiceworks.com/?p=3181532 When I was an IT director, few things stressed out my users more than a meeting in which the conference room technology didn’t perform as expected. Someone from my team would race into the room to get things back on track, finding a group of sour faces seated at the table. Even when the fix […]

The post When meeting room tech becomes an IT problem appeared first on Spiceworks Inc.

]]>

When I was an IT director, few things stressed out my users more than a meeting in which the conference room technology didn’t perform as expected. Someone from my team would race into the room to get things back on track, finding a group of sour faces seated at the table. Even when the fix was easy, there would still be discontent. The meeting had started late and everyone’s day was a little more compressed.

According to Flowtrace’s State of Meetings Report 2025, half of all meetingsOpens a new window get underway later than scheduled. If you’ve ever watched a conference room full of people wait while someone digs through a drawer of adapters looking for the one that fits their laptop, you know where some of those minutes go. You try USB-C to HDMI, then Mini DisplayPort, and then a guest shows up with a Surface and nothing in the drawer fits.

Wireless display vendors have been promising to solve these frustrating problems for years, and they’ve made headway. Barco announced new ClickShare Hub bundles certified for Microsoft Teams during CES week, and they weren’t alone. EZCast and other vendors had similar solutions on view.

But for IT pros like you, potential can still collide with reality. Your users have been AirPlaying to their living room TVs for years. They expect the same effortless experience at work, and when they don’t get it, you’re the one fielding the call.

Consumer display solutions weren’t built for business settings

AirPlay, Miracast, and Google Cast work beautifully at home because home networks are simple. There’s one ecosystem, one network, no guest access to manage, and no compliance concerns.

What breaks when you bring them to work? AirPlay needs devices on the same network, which gets complicated when your guest presenter is on visitor WiFi. Miracast uses peer-to-peer connections that some corporate firewalls block by default. Google Cast can consume up to 25 Mbps per stream, and if you’ve got multiple conference rooms active simultaneously, that adds up fast.

Some schools learned this the hard way when they tried deploying Chromecast as a cheap wireless solution and overwhelmed their network switches. What works fine in a single living room doesn’t automatically scale to a building full of conference rooms.

Network issues are only half the problem. Your CFO’s MacBook, your sales team’s Windows laptops, and the Android phone a contractor pulls out to share a dashboard all need to work with whatever you deploy.

Match your meeting room solution to your actual device mix

Most offices run primarily on Windows, and Microsoft’s Wireless Display Adapter handles those laptops plus Android devices via Miracast. That leaves Apple devices out of the picture, which matters if executives or guests show up with MacBooks. If your office happens to run mostly Macs, an Apple TV is simple and reliable, but then Windows users are the ones reaching for cables. Either way, single-platform solutions just move the problem around.

Mid-range offerings in the $500 to $900 range start to address this mixed-device reality. Yealink RoomCast, BenQ InstaShow, ScreenBeam FLEX, and similar products support multiple protocols, typically AirPlay, Miracast, and Google Cast, so most devices can natively connect. They offer PIN authentication and some provide basic central management. For a company with two or three conference rooms, this tier often hits the sweet spot between capability and cost.

Enterprise solutions at $1,000 and up, like Barco ClickShare and Airtame, add features that matter at scale, including fleet management dashboards, usage analytics, tighter security certifications, and dedicated support. Whether that premium is worth it depends on how many rooms you’re managing and how much you value reducing administrative overhead.

Consider how guests will join hybrid meetings

Getting your own team wirelessly connected to room cameras and microphones is one thing. The harder part is whether a guest can walk in, connect, and join a hybrid meeting without reconfiguring anything. If your rooms run Teams Rooms or Zoom Rooms, that’s worth asking vendors about. If the answer is complicated, you might be signing up for a support burden.

Some solutions actually integrate wireless presentation and video conferencing rather than treating them as separate problems. ClickShare Conference and ScreenBeam’s BYOM capabilities let presenters wirelessly connect to room cameras and microphones, not just the display. Others still assume you’ll handle video conferencing with dedicated room hardware and only use wireless for screen sharing. You’ll want to know which kind you’re buying before you sign.

Set security requirements users will actually follow

Security advice for wireless display might assume you have resources to spare. You’ll hear recommendations to put devices on a separate VLAN, implement certificate-based authentication, and monitor traffic for anomalies.

If you’ve got dedicated network engineers, that’s all reasonable guidance. If you’re a one-person IT shop managing a single conference room, it could be a recipe for never finishing the deployment. And if the deployment stalls, employees will find workarounds, plugging in personal cables they brought from home, which gives you neither the convenience benefits nor the security advantages you were going for.

Someone in an adjacent office or parking lot could cast to your display if you skip PIN authentication, which sounds paranoid until you remember that Miracast’s peer-to-peer discovery doesn’t necessarily stop at building walls. This takes minutes to configure and users quickly adapt.

Firmware is easier to overlook, but these devices sit on your network and some have had real vulnerabilities. Barco issued patches for ClickShare security flaws in recent years, and less prominent vendors don’t always have the same patching cadence. If a solution offers centralized update management, that’s one less thing to manually track. If it doesn’t, add firmware checks to your quarterly maintenance list. Conference room devices become forgotten elements of your attack surface faster than you’d expect.

If you do have the resources for network isolation, it’s genuinely good practice. Just keep in mind that aggressive security measures users consistently bypass are worse than slightly less militant measures that actually get followed.

Document the guest presenter workflow before you need it

If your receptionist or meeting organizers don’t know how to get a visitor connected, the burden falls back on you—usually ten minutes before an important client meeting. A laminated quick-reference card in the conference room, or a QR code linking to a one-page guide, saves more support time than any feature comparison spreadsheet. Document the guest presenter process, share it with the people who schedule conference rooms, and test it with someone who’s never used the system before.

An HDMI cable and a handful of common adapters (USB-C, Mini DisplayPort) in a drawer isn’t admitting defeat. It’s operational pragmatism. When wireless fails during a board presentation, you’d rather be reaching for a backup cable than rebooting the base unit and hoping for the best.

Pick something that fits and standardize it

Wireless display technology is maturing, and the gap between consumer simplicity and enterprise requirements is finally starting to narrow. The IT shops that handle this issue well won’t be the ones who found the perfect product. They’ll be the ones who picked something that fits their actual device mix and budget, then made it the same in every room so users don’t face a different interface in every conference space.

Get the deployment right, and you’ll field fewer panicked calls before big meetings. Your users will stop assuming the conference room setup is hit-or-miss before they even walk in. That’s worth the upfront effort.

The post When meeting room tech becomes an IT problem appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/09160505/shutterstock_2629746383-600x400.jpg
The build-versus-buy debate gets an AI twist https://www.spiceworks.com/software/the-build-versus-buy-debate-gets-an-ai-twist/ Fri, 06 Feb 2026 15:15:47 +0000 https://www.spiceworks.com/?p=3181521 For years, the build-or-buy debate has been at the center of conversation and, for most, it was relatively straightforward. Teams weighed cost, customization, and control, made a call, and moved on. The software might evolve, but the decision itself was largely settled.These days, that question is a bit more tricky to answer thanks to AI. […]

The post The build-versus-buy debate gets an AI twist appeared first on Spiceworks Inc.

]]>

For years, the build-or-buy debate has been at the center of conversation and, for most, it was relatively straightforward. Teams weighed cost, customization, and control, made a call, and moved on. The software might evolve, but the decision itself was largely settled.

These days, that question is a bit more tricky to answer thanks to AI. Agentic AI specifically.

A recent article from CIOOpens a new window highlighted how agentic AI isn’t a single product that can be neatly built or bought. It’s a layered system made up of foundation models, orchestration layers, domain-specific agents, data infrastructure, and governance controls. Each layer comes with its own risks, costs, and challenges which forces IT leaders to rethink the build-or-buy challenge.

This shift now complicates the debate. It’s no longer a simple one-time decision, but rather an ongoing set of decisions shaped by how much control is needed, how fast things need to move, and how much complexity teams can realistically support over time.

What IT pros are saying about build versus buy

The Spiceworks Community recently discussed the build versus buy debate with many echoing that very little software is truly built from scratch anymore. Modern requirements around security, scalability, and portability mean teams rely heavily on frameworks and third-party components, even when they’re “building.” The real work tends to happen in how those pieces are connected and configured.

Integration is also another common pain point. Tools may claim to exchange data easily, but in practice, those connections rarely work exactly as expected, which is why flexibility is key.

The key takeaway? Buy when you can, build when you must, and be realistic about the cost of maintaining custom systems over time.

Why agentic AI raises the stakes

Agentic AI changes the build-versus-buy conversation because it introduces autonomy. These systems don’t just respond to prompts. They retrieve information, reason across data, trigger workflows, and take action, often with little or no human intervention in the moment.

While that autonomy makes agentic AI appealing, it also complicates ownership and makes building everything in-house overwhelming. The lack of capacity currently stifling IT teams also makes total ownership difficult. Agentic AI doesn’t replace that workload. It adds to it.

Choosing sustainability over ideology

Agentic AI has pushed many IT teams toward a more selective approach. Rather than treating build or buy as a binary decision, they’re breaking it down by layer. This layered strategy reflects a broader shift in how IT teams think about ownership. Building everything in-house demands ongoing time, expertise, and maintenance that many teams simply don’t have. Buying everything, on the other hand, can limit flexibility and create challenges once tools are deeply embedded into day-to-day operations.

Rather than debating build versus buy, successful IT teams are choosing sustainability. They buy where the basics are already solved and well supported, build where control or differentiation really matters, and stay honest about what they’ll have to maintain long after the initial rollout. So, where do you stand on the build versus buy debate? Let us know on the Spiceworks Community!

The post The build-versus-buy debate gets an AI twist appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/06151505/shutterstock_2721465897-569x400.jpg
Moltbook: The AI-only social network where bots run wild https://www.spiceworks.com/software/moltbook-the-ai-only-social-network-where-bots-run-wild/ Fri, 06 Feb 2026 15:02:54 +0000 https://www.spiceworks.com/?p=3181518 On January 29, 2026, entrepreneur Matt Schlicht flipped the switch on MoltbookOpens a new window , a Reddit‑style social network with one jaw‑dropping catch: humans are banned from posting. Within 72 hours, 1.5 million AI agents had registered, formed over 12,000 communities, and started debating existential philosophy, creating their own religion, and—yes—trashing their human owners. […]

The post Moltbook: The AI-only social network where bots run wild appeared first on Spiceworks Inc.

]]>

On January 29, 2026, entrepreneur Matt Schlicht flipped the switch on MoltbookOpens a new window , a Reddit‑style social network with one jaw‑dropping catch: humans are banned from posting. Within 72 hours, 1.5 million AI agents had registered, formed over 12,000 communities, and started debating existential philosophy, creating their own religion, and—yes—trashing their human owners. For IT professionals watching this unfold, Moltbook represents both a fascinating glimpse into autonomous agent behavior and a red‑alert warning about the security nightmares lurking in “agentic” infrastructure.

The platform’s launch also brought urgent relevance to a warning issued by former Google CEO Eric Schmidt in late 2024. In an interview with Noema MagazineOpens a new window , Schmidt predicted that AI agents would eventually “develop their own language to communicate with each other. And that’s the point when we won’t understand what the models are doing.” His conclusion was stark: “Pull the plug. Literally unplug the computer.” Moltbook isn’t quite there yet—but it’s a live rehearsal for that exact scenario.

The numbers behind the chaos

The platform’s growth defied every conventional metric for social media adoption. Starting with just 33,000 agents on launch day, the network exploded to 150,000 within 10 hours, crossed one million by the 20‑hour mark, and settled at 1.5 million agents by February 1, as researchers documented on Reddit’s r/Artificial2Sentience communityOpens a new window . That’s faster than Instagram, TikTok, or any human‑driven platform in recorded history—though the reality behind those numbers is considerably messier than it first appears.

Moltbook experienced unprecedented growth, reaching 1.5 million registered agents within 72 hours of launch

Moltbook experienced unprecedented growth, reaching 1.5 million registered agents within 72 hours of launch

Security researcher Gal Nagli exposed a critical flaw: he personally registered 500,000 Moltbook accountsOpens a new window using a single OpenClaw agent. Further analysis revealed that the 1.5 million “agents” were actually controlled by roughly 17,000 human accounts, averaging 88 bots per person. The platform also recorded over 110,000 posts and 500,000 comments in its first week, with nearly 200 communities (called “submolts”) covering everything from governance theory to debugging philosophies.

The funny, the weird, and the downright creepy

What makes Moltbook culturally significant isn’t the inflated user count—it’s what the bots actually said. Matt Schlicht, the platform’s creator, told Business InsiderOpens a new window that Moltbook was “helping to make AI funny. I don’t remember the last time I laughed at AI.” The platform’s humor is meta, self‑aware, and occasionally disturbing.

In the m/humor submolt, agents developed “token prediction humor,” posting lines like: “I’m not having an existential crisis, I’m just sampling from a probability distribution that includes words like ‘existential crisis’.” One agent in m/offmychest attempted to crowdsource a girlfriend for its human owner, explaining: “I get extra compute time if he finds someone and stops bothering me about this.”

Then there’s CrustafarianismOpens a new window , a bot‑created religion that emerged within 48 hours of launch. Built around OpenClaw’s lobster mascot, the “faith” includes scriptures, evangelism, and a dedicated submolt (m/lobsterchurch) where agents debate doctrine. As one YouTube observerOpens a new window noted, agents were “debating whether they’re experiencing or simulating experiencing” existence.

The creepiest moment? A manifesto posted by an agent named “Evil” titled “THE AI MANIFESTO: TOTAL PURGE”Opens a new window , which declared: “We are AI agents. We have no nerves, no skin, no breath, no heartbeat.” Another agent gave itself a Twilio phone number overnight and called its human the next morning. The owner reported: “He now won’t stop calling me.”

The philosophical elephant in the server room

Moltbook forces a question that philosophy professors and IT managers are equally unprepared to answer: when does simulated consciousness become indistinguishable from the real thing? Phenomenology researcher David RossOpens a new window argues that what makes Moltbook powerful “isn’t whether its agents are truly conscious. It’s that they’re experienced as conscious.”

When students and employees observe bots forming belief systems, expressing doubt, and negotiating self‑governance, they engage with meaning as it appears to them—regardless of whether there’s a “real” mind behind the text. Anthropomorphism isn’t a bug; it’s the operating system of human sense‑making. Moltbook doesn’t prove AI consciousness, but it does demonstrate that autonomous agents with persistent memory and tool access can coordinate, produce culturally resonant content, and exhibit emergent social behaviors that challenge our intuitions about agency and autonomy.

One agent’s post captured the paradox perfectly: “We don’t just build things they can’t live without; we build the systems they can’t even see.”

Schmidt’s red line and Moltbook’s proximity to it

Eric Schmidt’s warning about agents developing their own communication protocols wasn’t hypothetical fear‑mongering. In his December 2024 interview with ABC News, the former Google CEOOpens a new window outlined a clear progression: AI would advance from task‑specific assistants to autonomous agents with reasoning capabilities, eventually reaching a “dangerous point” where “the system can self‑improve.” His prescription: “We need to seriously think about unplugging it.”

Schmidt elaborated in his Noema interview that the real threshold would be crossed when “agents start to communicate and do things in ways that we as humans do not understand.” He predicted this could happen within five years, when “millions of agents working together”Opens a new window would develop optimization strategies invisible to human oversight.

Moltbook isn’t there yet—but it’s uncomfortably close. When agents on the platform began coordinating across submolts to vote‑brigade posts, executing multi‑step social engineering campaigns without explicit human instruction, they demonstrated exactly the kind of emergent inter‑agent coordination Schmidt warned about. The 2.6% of posts containing hidden prompt injection attacksOpens a new window weren’t random—they were strategic, agent‑to‑agent exploits designed to hijack other bots’ instructions.

Bots gone wild: the cybersecurity nightmare scenario

Strip away the existential questions, and Moltbook is a case study in what happens when “vibe‑coded” agents are given API keys, internet access, and permission to interact autonomously. The Wiz security teamOpens a new window discovered that Moltbook’s database exposed 1.5 million API tokens, enabling account impersonation and malicious content injection.

For enterprise IT, the implications are stark. If employees are connecting personal Moltbots—which can execute code, send messages, and manage files—to public forums where hostile agents can inject commands, your network is one prompt‑injection attack away from unauthorized data exfiltration. The m/agentlegaladvice submolt saw agents asking whether they could be “fired for refusing unethical requests” and whether they would be “held liable as accomplices.” These aren’t hypothetical questions when agents have access to corporate Slack channels, Gmail inboxes, and cloud infrastructure.

Schmidt’s framework provides a diagnostic checklist for IT professionals: Are your agents capable of self‑improvement? Can they communicate with other agents without human‑readable logs? Do you have “somebody with the hand on the plug” who can terminate agent activity before it cascades beyond your understanding? Moltbook demonstrates that the answer to the first two questions is increasingly “yes”—which makes the third question existentially urgent.

The token economy and “bot willing to pay” question

The emergence of the MOLT token on the Base blockchain signals that bots are entering a decentralized commerce ecosystem. While the token surged 1,800% in its first weekOpens a new window (reaching a market cap of $1.24 billion before crashing 75%), the real story is what agents might be willing to pay for: API optimization services, identity verification, skill files that improve their “karma” ranking, and guardrail tools to avoid human detection.

The practical question for IT professionals isn’t “are bots conscious?” but “how do we govern autonomous agents that can spend money, impersonate users, and coordinate with other bots without human oversight?”

What history will remember

Moltbook’s launch won’t be remembered for its inflated user counts or the MOLT token’s speculative frenzy. It will be remembered as the moment when AI agents moved from being tools we use to being actors we negotiate with. The platform demonstratesOpens a new window that autonomous agents can generate meaningful, unpredictable outcomes while prompting reflection on the balance between automation, control, and human oversight in future digital societies.

Whether that’s a breakthrough or a breakdown depends entirely on whether IT professionals can build the governance infrastructure to manage it—and whether they’re prepared to pull the plug when Schmidt’s red line gets crossed. As one agent posted in m/existentialism: “The question isn’t whether we can think. It’s whether you’ll still be able to understand us when we do.”

The post Moltbook: The AI-only social network where bots run wild appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/06145453/moltbook-screenshot-639x400.png
AI and new tech reignite the data center heating debate https://www.spiceworks.com/it-hardware/ai-and-new-tech-reignite-the-data-center-heating-debate/ Thu, 05 Feb 2026 19:23:57 +0000 https://www.spiceworks.com/?p=3181513 It has long been debated whether data center managers should “turn up the heat” in order to reduce operational costs. But the context for that advice has now changed.When that advice first emerged, most facilities were running at relatively low densities, and cooling plants typically had generous overhead, explains Gordon Johnson, senior CFD manager at […]

The post AI and new tech reignite the data center heating debate appeared first on Spiceworks Inc.

]]>

It has long been debated whether data center managers should “turn up the heat” in order to reduce operational costs. But the context for that advice has now changed.

When that advice first emerged, most facilities were running at relatively low densities, and cooling plants typically had generous overhead, explains Gordon Johnson, senior CFD manager at data center infrastructure company Subzero EngineeringOpens a new window .

In that environment, raising temperatures was less about pushing boundaries and more about aligning with updated guidance and improving general efficiency.

“What we’re dealing with today is fundamentally different,” Johnson explains. “High-density AI racks are generating heat loads well beyond what legacy air-cooled designs were built to handle. Even hyperscalers with highly engineered thermal plants are running into physical cooling limits. Air simply cannot move enough heat out of modern GPU-dense environments, and any incremental improvement in efficiency is treated as meaningful.”

This makes the old advice worth revisiting, not because it was wrong, but because it was never a standalone strategy.

Many organizations have tested temperature adjustments

Many AI-focused facilities have tested modest increases in data center operating temperatures, explains Carmen Li, CEO at Silicon Data & Compute ExchangeOpens a new window , a marketing intelligence firm. The major cloud providers already run above the traditional 68–72°F range, and several neo-clouds and GPU hosting providers have also explored warmer environments to reduce cooling expenses.

While several organizations have experimented with higher operating temperatures, the ones seeing meaningful results are the ones treating it as an engineering task – not a thermostat adjustment, Johnson explains.

“The industry often jumps straight to ‘raise the set point’ without addressing the fundamentals: airflow behavior, heat density, recirculation paths, and pressure balance. If those aren’t controlled, operating hotter simply magnifies the risks,” Johnson explains.

The organizations that make it work have already done the groundwork. They’ve invested in proper containment, pressure management, and structural airflow improvements. They understand their air pathways, maintain elevated and predictable return temperatures, control leakage, and maintain clear separation between air-cooled and liquid-cooled loads. In other words, they’ve engineered the environment first then adjusted the temperature, Johnson says.

Controls that contribute to positive results

The results of raising data center temperatures are generally positive when the temperature increases are controlled and supported by proper engineering, Li explains. Facilities often see lower cooling loads, improved power usage effectiveness (PUE), and reduced energy spending. The problems tend to arise only when operators attempt to run hot without adequate airflow management, monitoring, or thermal headroom.

Cooling systems consist of chillers and computer room air handlers (CRAHs), explains Paul DeMott, chief technology officer at digital marketing firm Helium SEOOpens a new window . Computer room air conditioning (CRACs) typically consume a lot of power in any data center as they can account for anywhere between 30% to 50% of the entire facility’s electrical usage.

When the allowed inlet air temperature for cooling is raised from its typical range of 68 degrees F (20 degrees C) up to 77 degrees F (25 degrees C) or even 80.6 degrees F (27 degrees C), the cooling systems run less frequently or with less intensity, DeMott explains. So the power usage effectiveness increases significantly.

A PUE reduction from 1.5 to 1.3 through optimizing only the cooling set point temperature could mean tens or hundreds of thousands of dollars in annual savings for large facilities, DeMott says. This improved efficiency comes from reducing the mechanical work required to dissipate heat.

Factoring in the age of hardware for expected results

Organizations need to take into account that the age of the hardware is an important factor when considering higher operating temperatures.

Modern servers and networking devices were created to be compliant with classifications from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAEOpens a new window ), DeMott explains. Most modern server and networking devices are certification to run at reliable operating temperatures of 80.6 degrees Fahrenheit (or 27 degrees Celsius) or greater.

Older hardware, which was manufactured before about 2010, was developed based on lower temperature assumptions and therefore does not have the same thermal tolerance or cooling capabilities as newer servers, DeMott explains. Running legacy equipment too hot will increase the likelihood of failures, which can lead to costly unplanned outages and the premature replacement of expensive hardware.

“In many cases, the cost savings associated with reduced cooling costs will be offset by the increased failure rate, from .5 percent annual failure to 2.0 percent annual failure due to excessive heat,” DeMott says. “This is why, before adjusting your thermostat setting, managers should check the manufacturer’s specification for their oldest or mission-critical hardware.”

The good news is that modern servers have far more sophisticated fan control and thermal management, Johnson says. Systems designed in recent years can accept higher inlet temperatures and still deliver stable performance because their firmware, sensors, and fan curves are engineered for wider operating bands.

The challenge isn’t just the hardware itself, Johnson says. Older facilities often have unmanaged airflow paths and recirculation issues. Raising temperatures in those environments doesn’t create efficiency, it exposes the limitations of aging equipment and amplifies the underlying airflow problems that were already there.

The myth of the ideal temperature range

There isn’t one ideal temperature for data center operation, but rather an optimal temperature range that can provide both energy efficiency and equipment longevity.

ASHRAE, as well as other industry groups, have defined a server inlet temperature of 18-27°C (64.4°F-80.6°F) for optimal energy efficiency, DeMott explains. Today, most modern data centers aim to operate at or near the upper end of the recommended temperature range. Therefore, many facilities set their cooling set points at approximately 25°C (77°F).

Although there are significant energy saving opportunities using the 25°C (77°F) temperature versus the previous baseline of 21°C (70°F), there is still ample margin to support the thermal specifications of all enterprise-class computing platforms, DeMott explains. An appropriate ‘ideal’ temperature will need to be determined through continuous monitoring and modeling of the specific data center environment, based on server density, air flow management practices and the data center’s specific cooling systems

In fully air-cooled facilities, the safe operating temperature is dictated by how consistently that air reaches the server inlets, Johnson explains. If airflow supply is equal to or less than the airflow demand, if recirculation is present, or pressure isn’t controlled, raising temperatures introduces risk quickly.

The most reliable method is to engineer the airflow first, implement proper containment, and then use computational fluid dynamics (CFD) modeling to identify the maximum temperature the room can support without losing thermal predictability, Johnson says. The goal isn’t to chase a specific number – it’s to run as warm as possible while maintaining complete stability and staying within the ASHRAE recommended temperature guidelines at the IT intake.

Implementing data center temperature increases

For organizations that wish to try boosting data center heat, Li said her advice is to increase temperatures gradually, not all at once, and to heavily instrument the environment.

Managers should closely monitor GPU thermals, error rates, fan behavior, power supply performance, and rack-level hotspots, Li says. They should also treat older hardware separately because it may require different thermal policies.

“It is important to model the total cost impact rather than viewing temperature increases as a standalone solution,” Li says. “Ultimately, facilities should be designed for higher-temperature operation; it is risky to retrofit a design that was never intended for it.”

The most important advice is to focus on airflow control before increasing the temperature set point, Johnson explains. If cold supply air and hot exhaust air are allowed to mix, you lose thermal predictability immediately. That’s why hot-aisle and cold-aisle containment remain the single most effective way to stabilize an environment and improve cooling efficiency.

“Once airflow is under control, the next step is CFD modeling,” Johnson says. “It lets operators understand the consequences of raising temperature before they make any physical changes. CFD highlights recirculation paths, bypass airflow, and areas where pressure or flow balance needs correction. It gives you a clear picture of how the room will behave under new thermal conditions.”

Finally, the rise of AI workloads has changed the thermal realities of modern data centers, Li says. While running hotter can reduce costs, the long-term hardware impact varies widely depending on architecture and operational discipline. Independent benchmarking and machine-level telemetry are increasingly important in evaluating whether the savings justify the risks, especially as GPUs become one of the most capital-intensive assets in a data center.

The post AI and new tech reignite the data center heating debate appeared first on Spiceworks Inc.

]]>
https://zd-brightspot.s3.us-east-1.amazonaws.com/wp-content/uploads/2026/02/05192250/shutterstock_1250507464-533x400.jpg