Stop Web Scraping Now!

Copying content from the Web can be both a good and bad thing. I first wrote that line back in 2014, way before the era of AI that weaponized scraping. It was once a good thing because scrapers were used by data scientists and journalists to track trends and uncover government abuses. One abuse that wasn't anticipated back then was the massive CTRL-ALT-DEL of government web data, something that the Data Rescue Project is now trying to counteract.

In those early days of the twenty-teens, what we mostly worried about was someone copying your Web content and hosting it as their own elsewhere online. This happened with LinkedIn in 2014, when someone picked up thousands of personal profiles to use for their own recruiting purposes. That is a scary thought, indeed. I have had my own content copied by spurious sites numerous times too.

And lest you think this is difficult to do, there are numerous automated scraping tools that make it easy for anyone to collect content from anywhere, including BrightData and Scrapebox. I won't get into whether it is ethical to use these on a site whose content you don't own. Some of these attack sites are very clever in how they go about their scraping, using massive numbers of ever-changing IP address blocks to obtain their content and evade firewalls.

When I wrote about this topic in 2014, we were mostly concerned with how the web search engines would scrape your pages to index, categorize and rank things. I even did a video screencast review of a product called ScrapeDefender, which was clearly ahead of its time, so far ahead that it was eventually discontinued.

That seems so quaint now that we have the gigantic AI-based hoovering, which is just the latest example of Scraping Gone Bad. To help out, a number of security vendors have arisen to offer protection against bad scraping, including Imperva's Bot Protection and CloudFlare's Bot Management.

The problem with AI-fueled scraping and training bots is severalfold. First, they operate at tremendous scale, something on the order of a well-crafted DDoS attack. This means that you may need to upgrade your protection service to handle this level of traffic.

Second, these scraping bots don't all operate the same way. That means defenders must develop different mechanisms to try to stop them. Some site operators may be using something more homegrown, such as custom firewall rules that screen on geolocations or IP addresses. That could be trouble, especially if you don't set your rules properly and then create all sorts of false-positive headaches when your protective measures block the human visitors you actually want. To add insult to injury, most bots ignore the dictates in a robots.txt file, a common preventative measure.
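To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of homegrown screening a site operator might wire into their stack. The user-agent strings and the blocked network below are illustrative assumptions on my part, not a vetted blocklist.

    # Hypothetical homegrown bot screening; the user agents and the blocked
    # network are illustrative assumptions only, not a recommended blocklist.
    import ipaddress

    BLOCKED_USER_AGENTS = {"gptbot", "ccbot", "bytespider"}      # assumed examples
    BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # RFC 5737 test range

    def allow_request(client_ip: str, user_agent: str) -> bool:
        """Return False when a request looks like it comes from a blocked scraper."""
        if any(bot in user_agent.lower() for bot in BLOCKED_USER_AGENTS):
            return False
        addr = ipaddress.ip_address(client_ip)
        if any(addr in net for net in BLOCKED_NETWORKS):
            return False
        return True

    print(allow_request("203.0.113.9", "Mozilla/5.0"))   # False: blocked IP range
    print(allow_request("198.51.100.7", "GPTBot/1.0"))   # False: blocked user agent
    print(allow_request("198.51.100.7", "Mozilla/5.0"))  # True: allowed

The fragility is the point: a static list like this only catches bots that announce themselves, and as noted above, the aggressive scrapers rotate their addresses and ignore robots.txt entirely.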

Third, even if these bots are scraping your site at lower traffic levels, they may be harder to detect if your site is still delivering pages to its visitors. This is because some major analytics platforms screen bot traffic out of their counting algorithms by default, making analysis and accurate reporting of human visitors difficult. In some cases the bot traffic blends in with the background noise of the human-created internet.

Next, there is the woeful mess of legal action to try to stop the bots. Few jurisdictions have put anything in place to go after the miscreants, even when you can identify the source and figure out whether it is governed by any law you could enforce.

Finally, there can be an added cost to using some of these tools. For example, AWS has its own Web Application Firewall that can stop bot traffic, but it charges to implement this service. That means you are paying Amazon twice: once to receive the bot traffic, and again to stop it. And the scraping can hike up your hosting fees, because the bots are often hitting less-visited pages that can't be served from cache, which drives up usage charges from your hosting provider. Wikipedia found that 65% of its most expensive traffic was coming from bots scraping less-visited pages.

These issues and others were part of a new report that highlights just how hard it is to defend against web scraping at this enormous scale for a very particular audience: the folks who maintain the digital collections of various online museums, libraries, archives, and galleries. This group has worked hard to expand their audience across the world, often with limited resources, building their sites with what the report calls idiosyncratic technical architectures. The report surveyed 43 institutions, describes the extent of the problem, and outlines some of the countermeasures taken (most of which aren't effective).

The report poses this question: "It is in the long-term interest of the entities swarming them with bots to find a sustainable way to access the data they are so hungry for. Will that long-term interest spur action before the online collections collapse under the weight of increased traffic?" This is somewhat ironic: popularity has its price.

The Hidden AI Arms Race

Something remarkable is happening in plain sight, and most people—including the incumbents—aren’t paying attention. While everyone debates whether AI will replace jobs or transform industries, no one is talking about how AI is already systematically displacing the productivity tools we’ve used for decades.

This blog is an experiment in using AI tools. And while normally I write everything you read here manually, today's entry is 95% fabricated by machines. The following is based on an actual conversation between myself and Bob Matsuoka about the stealth transformation reshaping how we work—and why the tech giants are missing this opportunity. Bob publishes his HyperDev blog, a technical publication focused on practical applications of agentic coding technologies. Drawing on his experience as former Head of Product Engineering at TripAdvisor and CTO of Citymaps, he provides hands-on reviews and strategic insights for developers navigating AI-powered development tools. He currently serves as fractional CTO for multiple companies while actively building with the technologies he covers. We last saw Bob's work two decades ago when he penned a piece on why the relatively new iPhone was a big deal — for its Clock app.

Bob and I spoke earlier in the week; he recorded our meeting with Granola, a note-taking app. He then copied the transcript into a Claude thread that he uses for this purpose. While that was being developed, he made another Claude project to extract my previous work from my blog posts. We both manually made edits to a GDoc to clean up our respective dialog. We each spent about 30 minutes on our own with all this work, not counting the hour that we spoke to each other.

Bob: You know, that VisiCalc comparison keeps rattling around in my head. But here’s the thing—what I’m seeing isn’t just a better spreadsheet. Six months ago, if I needed something done, I’d think “Gmail” or “Excel.” Now? Claude is my first stop for everything.

David: So when you say “go first”—you’re talking about bypassing the usual suspects entirely? Gmail, Calendar, the whole productivity stack that we’ve all been living in for the past decade-plus? That’s a pretty fundamental shift in user behavior.

Bob: This local MCP bridge I’ve got running—it connects to 55 different tools. Apple contacts, Google Calendar, Gmail. I can read emails, write responses, schedule meetings, all from what amounts to an AI dashboard. The thing just works.

David: Interesting. So we’re talking about the AI interface becoming your primary operating environment. I remember when browsers started feeling like the “real” desktop for most people—that was maybe 20 years ago? This sounds like the next evolution of that shift, but faster and more comprehensive.

Bob: Pretty much. And here’s what caught me off guard—Anthropic and OpenAI didn’t make any big announcements about this. They just shipped it. Last week, Anthropic rolled out unlimited Retrieval Augmented Generation training. You can dump a thousand documents into a project folder now and it trains on all of them. No press release. No marketing campaign. They’re moving so fast they don’t bother promoting major features.

David: The velocity here is what really gets my attention. I covered Google's office suite launch back in the day—that was a multi-year enterprise sales campaign with migration consultants, pilot programs, the whole nine yards. These AI companies? They're just shipping features like it's continuous deployment. No marketing blitz, no sales engineering team, no six-month enterprise evaluation cycles. It's almost like they don't even realize they're disrupting a multi-billion dollar market. Every day their AI tools are gaining functions.

Bob: Exactly that. But the underlying architecture is different this time. Let me explain—last month I had this travel client who needed a pricing model. Old me would have built some monster Excel sheet with formulas and maybe Visual Basic scripts. Instead, I uploaded the raw CSV data to a custom GPT, had it write Python code using NumPy, and boom. I had an interactive model where the client could ask “What’s my margin on a five-day trip to Miami during peak season?” No spreadsheet. Just answers.

David: Rather than handing clients a spreadsheet with 47 tabs and saying “good luck figuring this out,” you gave them something that actually responds to questions. That’s a fundamentally different interaction model from what we’ve been calling productivity software for the past couple decades.

Bob: And they never saw a spreadsheet. They just got answers. That’s the shift—from tools that require expertise to interfaces that provide insight.

The Pattern David Has Seen Before

David: I've covered enough of these platform transitions to recognize the pattern by now. IBM dominated when mainframes were everything—I remember those room-sized monsters that cost more than most houses. Microsoft crushed them with personal computers (which seemed crazy at the time, if you think about it). Google ate Microsoft's lunch with web-based productivity tools. Each time I watch it happen, I think "this is unprecedented"—but the script keeps repeating itself with different players.

Bob: What’s the pattern you see repeating?

David: It’s the same story every cycle—the incumbent gets comfortable with their revenue streams and loses their edge. IBM couldn’t imagine computing without those room-sized mainframes (which, let’s be honest, still generate fantastic margins). Microsoft initially dismissed web-based productivity as “toys” when Google Docs launched. Now Google’s painted themselves into a corner with the advertising model—everything has to generate data for ad targeting, which fundamentally conflicts with what users actually want from productivity tools. They can’t see past their golden goose to imagine what work looks like when it’s not subsidized by surveillance capitalism.

Bob: That's exactly what's happening. But look, there's another layer here. Google Office replaced desktop software with web apps—same basic concept, different delivery. But AI tools? They're reasoning engines. They can process context, maintain state across tasks, generate stuff that didn't exist before. That's not an upgrade. That's a different category.

David: Can you walk me through a specific example? I’m trying to understand how this “collaborative intelligence” differs from, say, really sophisticated autocomplete or code suggestion tools. Where’s the line between automation and genuine collaboration?

Bob:  I built this MCP Gateway project with pretty open-ended instructions—basically told it “solve problems however you think makes sense.” The thing that gets me is how it discovers its own solutions. Like, I asked it to set up a reminder for me, expecting it would use my Google Tasks tool. But because I said “reminder,” it went off and found the Mac OS Reminders app on its own, wrote AppleScript code to access it, and created the reminder there. I never told it about the Reminders app. It just discovered that path. That’s what I mean by collaborative intelligence—not that it’s actually intelligent, but it’s genuinely good at discovering new ways to solve problems. It doesn’t just follow patterns; it finds solutions I wouldn’t have thought of.

David: That’s helpful context, but help me connect this back to the productivity software question. How does understanding—excuse me, analyzing—code architecture translate to replacing the basic office tools that most knowledge workers live in every day? Excel, Google Docs, PowerPoint presentations?

Bob: The AI doesn’t just format code or fix syntax errors. I’ve configured it to analyze what I’m trying to build. It refactors entire architectures, suggests performance optimizations, debugs problems I didn’t know existed. That’s not a better text editor—that’s collaborative intelligence.

Why the Incumbents Are Missing It

David: Here's what puzzles me about this whole situation—Google's got more money than they know what to do with, they've practically got a monopoly on search, they hired half the world's AI researchers, they invented the transformer architecture that makes all this possible. They certainly had plenty of AI-themed announcements at their I/O conference recently, so they are building stuff. But they should own this space, not just be another player.

Bob: Two things. First, they’re trapped by their own success. Their business model depends on keeping people in their ecosystem, showing ads, and collecting data. But AI-first productivity doesn’t fit that model. When I ask Claude to analyze my calendar and suggest optimizations, the last thing I want is ads mixed into that analysis. Second, they’re thinking about AI as a feature to bolt onto existing tools. Google’s putting AI into Docs and Sheets. Microsoft’s adding Copilot to Office. That’s like adding internet features to desktop software in 1995. They’re missing that the entire interface paradigm is changing.

David: Yes, there’s also the organizational antibody problem that I’ve documented at pretty much every large tech company I’ve covered over the decades. You’ve got thousands of engineers working on incremental improvements to Gmail, Docs, Sheets—products that generate billions in revenue and support entire divisions. Walk into a meeting and suggest “let’s cannibalize all this for something completely different” and watch how fast you get politely but firmly shown the door. (Even if you’re absolutely right about the technology direction.) The incentive structures just don’t support that kind of self-disruption, especially when the current products are still growing and profitable.

And that brings us back to the deployment velocity problem. These AI companies are shipping major new capabilities weekly, sometimes multiple times per week. Google’s enterprise software division? They’re operating on quarterly release cycles if you’re lucky. That’s not just a technology gap—that’s a fundamental cultural and organizational chasm that’s very difficult to bridge in large organizations.

Bob: Plus, they’re not constrained by existing user expectations. When Anthropic ships a new capability, users adapt their workflows to take advantage of it. When Google changes something in Gmail, users complain that their familiar interface looks different.

The Enterprise Implications

David: Let's shift focus to enterprise implications, because this is where things get really messy. I've been getting calls from CIOs and IT directors who are completely caught off guard by this bottom-up adoption pattern. Their employees are using these AI tools for mission-critical work—and reporting dramatic productivity gains—but IT has zero visibility into data governance, security policies, or compliance implications. It's shadow IT on steroids, and frankly, most organizations aren't equipped to handle it. This sounds familiar to those of us who started using PCs back in the 1980s.

Bob: That’s the other thing that’s different about this transition. Previous platform shifts were top-down. IT departments evaluated productivity suites, negotiated enterprise contracts, managed rollouts. But AI tools are getting adopted bottom-up by individual workers who see immediate gains. I know developers using Claude or Cursor for all their coding because it makes them probably 3x more productive. They’re not waiting for company approval. They’re just using the tools and dealing with governance later.

David: Shadow IT on steroids—and I say this as someone who’s been tracking unauthorized technology adoption in enterprises since people were sneaking Dropbox accounts onto corporate networks back in 2008 and spreadsheets in the 1990s. The scale and speed of this AI tool adoption is unlike anything I’ve documented before.

This puts IT departments into an impossible situation: they can try to block these tools and essentially tell their organization “we’d rather be less productive, thank you very much.” Or they can try to manage technologies they don’t understand well enough to create appropriate governance policies. I’ve watched CIOs try to navigate this, and honestly, it’s not pretty. The usual enterprise software playbook—pilot programs, vendor evaluations, compliance frameworks—doesn’t work when your employees are already using these tools daily and seeing immediate benefits.

Bob: Here’s what the AI companies figured out. They’re offering enterprise versions with security features, compliance controls, audit trails. But the value proposition isn’t “replace your existing tools.” It’s “make your people probably more effective at their jobs.” Hard to argue with that.

David: What’s your timeline prediction here? Because in all my years covering enterprise software transitions—and I’ve tracked everything from mainframe migrations to cloud adoption—they usually take forever. Even cloud-based email adoption still took most enterprises three to five years to complete. But this feels fundamentally different from anything I’ve documented before.

Bob: Based on what I’m seeing with my clients? Eighteen months, maybe less. When I can build sophisticated data analysis in three hours that used to take weeks, I’m not going back. And once you experience that kind of leverage, everything else feels like working with oven mitts on.

What Comes Next

David: What should people in the industry be watching for? What are the early warning signs that this shift is accelerating?

Bob: Multi-agent systems. Right now, most people use AI tools one at a time—Claude for writing, Cursor for coding, maybe Perplexity for research. (Though in fact, both Anthropic and OpenAI are actually using multi-agent systems in their desktop tools now.) But I’m starting to see orchestration tools that coordinate multiple AI agents on complex tasks. That’s when this really explodes, because you’re not just replacing individual productivity tools. You’re replacing entire workflows.

David: And what about the incumbent giants? Google, Microsoft—what’s their play here, assuming they wake up before it’s too late?

Bob: They need to pick a lane. They can survive losing the innovation high ground—IBM did, (old) Microsoft did. Both companies are more profitable now than when they dominated. But if they want to remain relevant for the next generation of workers, they need to build AI-native productivity platforms, not bolt AI features onto legacy products. The talent migration tells the story—I’m seeing engineers leave Google and Microsoft for AI companies, not because of money but because that’s where the interesting problems are.

That’s the real signal right there. When the smartest people in your industry start working elsewhere, you’re no longer the future. You’re just the past that happens to be profitable.

The Bottom Line

As our conversation wound down, I kept thinking about how this transition feels both familiar and unprecedented. The pattern of disruption—incumbent complacency, technological shift, talent migration—follows a predictable script. But the speed and scope of change feels fundamentally different.

“We’re not just watching productivity software get replaced,” I found myself reflecting as we wrapped up. “We’re watching the entire concept of ‘software’ evolve into something more like collaborative intelligence.”

The implications extend far beyond which productivity suite you use. We’re looking at a fundamental shift in how humans and machines work together, happening largely under the radar while everyone debates the bigger questions about AI’s future.

The companies that recognize this shift early—and adapt their workflows, their hiring, their entire approach to knowledge work—will have a massive advantage. Those that don’t may find themselves wondering how they went from industry leaders to legacy providers in the span of a few quarters.

The arms race is real. It’s happening now. And the winners aren’t necessarily the companies with the biggest marketing budgets or the most enterprise sales reps. They’re the ones building the tools that knowledge workers reach for first every morning.

My love affair with MS-DOS

I wrote this in 2011 when I was running a piece of the ReadWrite editorial operation, and recently rediscovered it. Other than making a few corrections and updating the dates, I still share the sentiment.

Can it be that DOS and I have been involved with each other for more than 40 years? That sounds about right. DOS has been a hard romance, to be sure.

Back then, I was a lowly worker for a Congressional research agency that no longer exists. I was going to write "a lowly IT worker," until I remembered that we didn't have IT workers 40+ years ago: Information Centers really didn't come into vogue for several years, until the IBM PC caught on and corporations were scrambling to put them in place of their 3270 mainframe terminals. Back in 1981, we used NBI and Xerox word processors. These were behemoths so unwieldy that they came installed with their own furniture. We had impact printers and floppy discs that were eight inches in diameter. The first hard drives were a whopping 5 MB and the size of a big dictionary. But those came a few years later.

At the agency, one of the things that I figured out was how to hook up these word processors to a high-speed Xerox printer that was also the size of a small car. We had to use modems, as I recall: you know, those things that beeped and used to be included in every PC? When was the last time you used a PC with an internal modem, or a floppy disc? I can't remember, but it has probably been more than a decade for both. Remember the hullabaloo when Apple came out with a laptop without a floppy? Now we have them without any removable storage whatsoever: they are called iPads. Steve Jobs was always ahead of the curve.


Anyway, back to DOS. I used to pride myself on knowing my mistress’ every command, every optional parameter. And we had EDLIN, a very primitive command line editor. It wasn’t all that hard – there weren’t more than a dozen different commands. (Of course they are preserved by Wikipedia.) When a new version came out, I studied the new manuals to ferret out tricks and hidden things that would help me slap my end users who would love to do format c:/s and erase their hard drives.

And new versions of DOS were a big deal to our industry, except for DOS 4, which was a total dog. One of my fondest memories of that era was going to the DOS 5 launch party in the early 1990s: Steve Ballmer was doing his hyperkinetic dance and sharing the stage with Dave Brubeck. To make a point of how bad DOS 4 was, Brubeck played "Take Five" in 4/4 time before switching to 5/4 time as it was intended. Those were fun times.

But DOS wasn't enough for our computers, and in the late 1980s Microsoft began work on Windows. By 1990, we had Windows v3, really the first usable version. By then we had also had the Mac OS for several years, and graphical OSes were here to stay. DOS went into decline. It didn't help that a family feud with DR DOS kept many lawyers engaged over its origins, either. As the 1990s wore on, we used DOS less and less, until Windows 95 finally sealed its fate: the first version of Windows that didn't need DOS to boot.

I won’t even get into OS/2, which had a troubled birth coming from both IBM and Microsoft, and has since disappeared. My first book, which was never published, was on OS/2 and was rewritten several times as we lurched from one version to another, never catching on with the business public.

Once PC networks caught on, DOS wasn't a very good partner. You had 640 kilobytes of memory – yes, KB! – and network drivers stole part of that away for their own needs. Multitasking and graphical windows also made us more productive, and we never looked back. For a great ten-minute video tour and trip down memory lane, see this effort by Andrew Tait showing successive upgrades of the Windows OS.

But DOS was always my first love, my one and only. I still use the command line to ping and test network connectivity and to list files in a directory. There is something comforting about seeing white text on a mostly black screen.

Yes, we haven’t been in touch in many years, and now when I need a new OS I just bring up a VM and within a few minutes can have whatever I need, without the hassle of CONFIG.SYS or AUTOEXEC.BAT. (Here is a column that I wrote a few years ago about getting Windows NT to work in a VM.) But happy birthday, DOS, and thanks for the memories. It’s been lots of fun, all in all.

What becomes editorial collaboration most?

I have written several times about the technologies and processes that enable collaboration (talking about the former here for SiliconAngle in 2023 and about appropriate tools here for Biznology's blog in 2021). My reason for writing about this now is that, having known and worked around and for Dylan Tweney for many decades, I was interested in a report he released today about the state of editorial content. In my SiliconAngle post, I describe some of the more successful collaboration efforts down through history, including the codebreakers at Bletchley Park during WWII and the team behind the 2015 Ford Mustang redesign.

A large part of Dylan’s report deals with the collaborative effort that is involved in producing content in this AI era, based on a self-selected survey of 169 respondents.

In my many years creating content, I have seen plenty of situations where great content is edited into some uninteresting pablum by a group assigned to review my work. Now, I am not a member of the "every word out of my word processor is precious" school. But I often cringe when an editor – or a gaggle of them – starts reducing the value of my content rather than adding to it.

Part of the problem here is understanding where and when the "collaboration" actually begins. Is it at the first light of an assignment, where 15 stakeholders weigh in on a Zoom call about what the first draft will be? That is unworkable, as many of Dylan's respondents say with their "too many cooks" comments. Or is it when someone in the workflow wants to back up and start from another POV that wasn't recognized initially?

Many people characterize collaboration as a team sport. However, the analogy breaks down when we look more closely. In sports, you have definite rules of play, which are mostly missing in content creation. You also have well-defined roles — also MIA. You have leaders who delegate specific tasks (at least the better ones do), but this is often hard to define in the content creation biz. So yes, you need a team. But the idea that "everyone thinks they are an editor, and thinks they are good at it" is just wrong-headed. Eventually, the ref must blow the whistle so play can resume. A better analogy would be a "team of rivals" (apologies to Doris) who have to work together and make up the roles, the rules, and who is in charge as they go along.

I don't think all collaboration all the time is necessary at every stage of content creation. At some point, an individual author needs to synthesize all the (often conflicting) points and produce something that tells the story and connects with the eventual audience. This is why AI in its current incarnation is a total fail. In writing my stories I have had interviews with sources that directly contradict each other, or at least at first blush. Dive deeper, and the devil is in the details. That is what makes my experience valuable — yes, you can assign a 20-something to do the initial interviews, but they would probably miss these details. So finding sources and knowing the questions to ask are key ways I collaborate.

Certainly, we need to adopt better project management tools and use them more effectively. Dylan's report shows that many content creators still use simple things such as GDocs, with a light seasoning of Grammarly. (A side ironic note: GDocs didn't have real-time collaboration features when it was initially created; Google later purchased that technology and incorporated it into the service.) There is still plenty of content created by passing serial edits back and forth via email. That is still the most common PM method that I see being used among my clients. Maybe I have the wrong clients <G>.

Part of the challenge here is that true editorial management is a lost art. Remember when our pubs had copy desks to manage their workflows? When was the last time a PR agency had something similar? Now that anyone can push a button and post something online, managing this process means saying "don't push the publish button quite yet." Most of the big b2b tech sites that I have worked for in the past couple of decades have a "post first, copy edit and fix later" philosophy. That is no way to manage anything. When I worked for ReadWrite back around 2012 and ran a bunch of their b2b websites, I had one writer who refused to acknowledge that he worked for me. He was a free agent, and damn the torpedoes, full steam ahead, and how dare I mess with his golden prose? That was an impossible situation, but it was tolerated because he wrote (a lot of) good stuff.

Another challenge is relying on meetings, whether virtual or in person, as a collaboration path. Running them well requires skill, something we don't all have, to bring out the best collaboration from all participants.

Sharing is more than caring. And the real challenge is that collaboration usually carries what is called a "work tax," because the tools mentioned take the creator out of the context of creation and interrupt the heat of the creative workflow by adding an explicit sharing step. And it is ironic that the people who should know tech best are often the ones paying the highest tax.

Red Cross provides help and hope to St. Louis man displaced by May tornado

Steven Reason in a shelter after being displaced by a tornado

Steven Reason is one of the many St. Louis residents displaced by the May 16th tornado that tore through the metropolitan area. He found refuge at one of the American Red Cross shelters set up immediately after the storm. He was watching a YouTube video and dozing off "when I heard the warning on my phone," he said. "I could hear the wind blowing outside my apartment." I share his story on the Red Cross chapter blog here.

The tornado passed within a few blocks of my home; fortunately, I wasn't affected other than a few hours without power. But thousands of my fellow St. Louisans have been rendered homeless or face other dire circumstances. Please support your local Red Cross if you don't live in the area, and if you are able to volunteer your time here, contact me if you are interested in helping. There is much to be done.

CSOonline: Threat Intelligence Platforms Buyer’s Guide

The bedrock of a solid enterprise security program begins with choosing an appropriate threat intelligence platform (TIP) and then using it to design the rest of your program. Without the TIP, most security departments have no way to integrate their various component tools and develop the appropriate tactics and processes to defend their networks, servers, applications and endpoints.

What is newsworthy is that the threat universe has gotten a lot more complex and focused. For example, the Verizon DBIR found that threats aimed at VPN and edge devices have surged to more than eight times what was reported last year.

The early TIPs were very unsophisticated products, often just cobbled-together intelligence feeds of the latest exploits, with little or no detail. Today's TIP has much richer information, including the underlying complexities and specifics about how a threat operates. I talk about what some of these are in my latest post for CSOonline, along with short summaries of several TIPs from Bitsight, Cyware, Greynoise, Kela, Palo Alto Networks, Recorded Future, SilentPush and SOCRadar.

Learning from Abe and Agatha

What do Abe Lincoln and Agatha Christie have in common? Let me rephrase that: what do the actors who have portrayed these historical figures have in common? Both have used advanced technologies to help make their performances more believable and interesting to modern audiences. Let's take a closer look at what is going on.

Last week, the NYTimes wrote a piece about a new series of lectures available on the BBC's Maestro series, delivered by Christie. Well, by the actress Vivian Keene, who plays the part of the noted author and playwright. The lectures, which are available as a series here, are designed to teach you how to write fiction, and in particular crime fiction. They were assembled from her words and writings over her career, including the specifics of how she chose her plots and characters. The lectures are supplemented with a variety of written exercises and other materials, and here you can watch an introductory episode for free. Otherwise, you need to pay about $90 for the whole lot. Keene is just one of many dozens of contributors who helped pull this production together. And what you hear is not her own voice but speech that has been altered by an AI tool to make it sound more like Christie. (BTW, there are a number of these tools available, such as those from ElevenLabs.io, Hume.ai, and ReplicaStudios.com, just to name a few that I have used.)

I found the first few minutes of the freebie lecture interesting, but then my attention waned and I thought it wasn't very compelling. Perhaps because I try not to write fiction, I wasn't really interested in spending the dough and watching the paid lectures. But I did like the fact that the underlying tech isn't really all that noticeable, which I guess is the whole point of the BBC exercise.

It reminded me of watching Hal Holbrook play Mark Twain for many decades. Indeed, when you think of what Twain looked like, you probably recall the image of one of Holbrook’s performances. But he wasn’t trying to teach us anything about how Twain wrote his books, just entertain us.

Let's move on to Lincoln and a production called "Ghosts of the Library" at his own presidential museum in Springfield, Ill. I saw it many years ago. To be accurate, Lincoln is not the star attraction of the show; that role goes to an actor playing a librarian who explains his craft. What is interesting about this live performance is that it makes use of a variety of technologies, one of which dates back to the Civil War era, to project holographic images on a glass panel that separates the set from the audience. (Back then the projections used candlepower rather than electricity.)

At the Lincoln museum, the technology is also not immediately apparent, and when I saw the show I was accompanied by the PR person who took time to explain it to me and introduce me to the actor. The show depends on split-second timing and the ability of the actor to hit his marks exactly so that he matches up with the projected images.

The other exhibits at the Lincoln museum are notable in how they use various bits of tech trickery, something that I wrote about in a 2008 piece for the NYTimes. For example, when you enter the room containing the exhibit about Lincoln's death, the thermostats are set to cool the room so you feel a bit of a chill.

Tech will continue to improve the audience experience to be sure. The real effort is to make it part of the production background, and to do that right requires a lot of time and effort and expertise. If you do sign up for the BBC lectures, do let me know what you think.

How to become an American ex-pat

I have known Rich and Marcia, an American married couple who both work in tech, for decades. This is their story about how they decided to pack up and move to a suburb of Lisbon, Portugal. I recently interviewed them via Zoom.

They started looking at so-called Golden Visas back in 2016. (You can figure out the significance of that date.) This type of visa is a way for ex-pats with sufficient means to emigrate to a country and live there permanently. The couple became interested in Portugal and vacationed there a few times before the Covid lock-downs. "There was no second choice country," Marcia said. They got temporary resident status last summer and moved there for good in March. They are both about my age and mostly retired, although Rich continues to work in tech a few days a week. "We wanted a higher quality of life, and wanted to find a place where people aren't as obsessed with work or doing the Silicon Valley 24/7 hustle to get ahead," they said. So far things are working out — "We are eating better things that taste better and cost much less."

Well, almost. "March was horrible," she said. "Our cognitive load was intense, and we found bunching up errands wasn't going to work. We had to spread them out more." Part of the problem is that the way the Portuguese do things is somewhat different, such as activating a credit card (you first use it to buy something in an actual store, then go through the activation process in person), doing more business f2f, or navigating governmental processes that involve multiple forms and understanding the sequence involved. (I would say I have the same problem dealing with the City of St. Louis.)

But since then things have gotten better. “It really changes your nervous system, and we are a lot more chill here than in the States,” she said. As an example, the recent extended power outage wasn’t any big deal for either the couple or their neighbors.

Here are some lessons they have learned:

  1. If you make the move, understand that you aren’t going to replicate your American lifestyle. “You need to figure out what the natives do and how they live, and build on that,” they said.
  2. Throw away any preconceptions that you have formed before the move. "Most of them were flat out wrong," they said. Amazon Prime next-day delivery? Not in Portugal. Packages will arrive seemingly at random. (I could say the same thing about my own Prime service, sometimes.) Two-factor authentication works slightly differently too. Milk is sold in shelf-stable ultra-pasteurized containers, not in the fridge aisle. There is less of a selection of consumer goods but a wider range of less expensive options to balance it out.
  3. Be patient with building your personal friend community. "There is a lot more emphasis on f2f interactions," they said, and your community might end up spreading across several cities or even countries. Expect this to take months if not years to build up your network. They initially picked their location because of its large ex-pat English-speaking community, but even so it will take time to find their peeps.
  4. Understand your relationship dynamics as a couple. If you are thinking about moving as a couple, realize that after you move you will be dependent on each other for large portions of the day and the majority of situations. If you are used to spending time apart for particular activities, that may require some adjustments. "We are much more tightly bound to each other and have a different dynamic, because we don't speak the language and have to depend on each other," she said.
  5. Get your consulting team together to ease the transition, especially if you aren't fluent in the language. The couple does speak a little Spanish, which is a different language from Portuguese. "But it is the little things that catch us," they admitted. Who is on their team? Someone who can help navigate the healthcare system (a combination of private and public providers), someone who can help navigate government paperwork and processes, a relocation consultant who is local to your target area to help set up your household, and a lawyer to handle the initial visa requirements. All of these folks will save you a lot of time and frustration and school you in how Things Get Done.
  6. Don’t make the move all about saving money. Yes, plenty of places are less expensive than the States (which the couple charmingly refer to as “the old country”), but not necessarily by a big enough discount. And not necessarily uniformly across all expense categories. For example: healthcare. “We get ten times the service at a tenth of the cost of what we had in the States,” Rich said. “Healthcare is a human right here,” said Marcia.

Book review: Hive by DL Orton

The book Hive by DL Orton is a winner. We have time travel, the multiverse, a romance that spans decades, and an evil billionaire who is trying to conquer the world with his technology. There is something in this sci-fi novel for everyone, and I thoroughly enjoyed it. The couple at the center of the plot finds themselves in dire circumstances, trapped in a biosphere-like structure with a snappy AI companion that is part HAL, part Hitchhiker's Marvin, only in a much better mood. There are cameo references to those and other seminal sci-fi works to further delight readers. I won't tell you how it ends, but the couple's journey through multiple universes and spacetime is interesting and delightful. Highly recommended.

CSOonline: Top tips for successful threat intelligence usage

Enterprises looking to stem the tide of breaches and attacks usually end up purchasing a threat intelligence platform (TIP). These can take one of several forms, including a managed cloud-based service or a tightly coupled tool collection that provides a wider risk management profile by tying together threat detection, incident response and vulnerability management. More than a dozen vendors offer TIPs, and in a few weeks I will be posting my buyer's guide, which goes into more detail on some of them. In the meantime, you can examine my top tips for selecting a TIP here on CSOonline.