How IT can learn from Target and Walmart

With all the holiday shopping happening around now, you have probably visited the websites of Target and Walmart, and maybe that prime Seattle company too. What you probably haven’t visited are two subsidiary sites of the first two companies that aren’t selling anything but are packed with useful knowledge that can help IT operations staff and application developers. This comes as a surprise because:

  • they both contain a surprising amount of solid IT information that, while focused on the retail sector, has broader implications for a number of other business contexts
  • they deal with many issues at the forefront of innovation (such as open source and AI), not something normally associated with either company
  • both sites are a curious mixture of open source tool walkthroughs, management insights, and software architecture and design
  • many of the posts on both sites are very technical deep dives into how the companies actually use the software tools, again not something you would ordinarily expect to find from these two sources

Let’s take a closer look. One post on Target’s site is by Adam Hollenbeck, an engineering manager, who wrote about their IT culture: “If creating an inclusive environment as a leader is easy for you, please share your magic with others. The perfect environment is a challenge to create but should always be our north star as leaders.” Mark Cuban often opines on this subject. Another post goes into detail about a file analysis tool that was developed internally and released as open source. It has a user-friendly interface specifically designed to visualize files, their characteristics, and how they interconnect.

Walmart’s Global Tech blog leans heavily into the company’s AI usage. “AI is eliminating silos that developed over time as our dev teams grew,” Andrew Budd wrote in one post, and GenAI chatbot solutions have been rolled out to optimize Walmart’s Developer Experience, a central tool repository. There are also posts about other AI and open source projects, along with a regular cyber report on recent developments in that arena. This is the sort of thing you might find on FOSSForce.com or TheNewStack, both tech news sites.

Another Walmart article, posted on LinkedIn, addresses how AI is changing the online shopping experience this season with more personalized suggestions and predictive content (does this sound familiar from another online site?), and mentions how all Sam’s Club stores have the “just walk out” technology pioneered by Amazon. (I wrote about my 2021 experience here.)

One other point: both of these tech sub-sites are not easily found. Neither tech.target.com (not to be confused with techtarget.com) nor tech.walmart.com has a link from its company’s home page. “I’m not sure these pages should be linked from the home pages,” said Danielle Cooley, a UX expert whom I have known for decades. “As cool as this stuff is for people like you and me and your readers, it’s not going to rise to home page level importance for a company with millions of ecommerce visitors per day.” But she cautions that discoverability could be an issue: “I did a quick google of ‘programming jobs target’ and ‘cybersecurity jobs target’ and still didn’t get a direct link to tech.target.com, so they aren’t aiming at job openings. But also, the person interested in cybersecurity will not also be the person interested in an AI shopping assistant, for example.” And because the content is so broad, even visitors who do land on one of these sites might leave frustrated if they arrived looking for something specific.

You’ll notice that I haven’t said much about Amazon here. It really isn’t fair to compare these two tech sites to what Amazon is doing, given Amazon’s depth in all sorts of tech knowledge. And to be honest, in my extended family we tend to shop more at Amazon than at either Target or Walmart. But it is nice to know that both Target and Walmart are putting this content out there. I welcome your own thoughts about their efforts.

A very practical business book to help spur innovative thinking

The Imagination Emporium by Duncan Wardle is an interesting business book. Unlike many books that fall flat after a solid first chapter full of suggestions on how to improve your workaday life, Wardle’s book is solidly constructed and chock full of real tools to help you figure out new ideas, sort and rate them, and act on the best ones after building consensus among various stakeholders. It is a “What Color is Your Parachute” reimagined for the digital, collaborative age, and like Parachute it contains some simple but very effective ideation exercises. The trick is to actually stop reading and work through them to generate the ideas yourself.

Wardle is the former head of innovation and creativity for Disney, and the design of the book’s pages shows exactly how creative and clever he can be at getting you to use his tools. It starts off by exploring the “river of thinking,” where colleagues shoot down your ideas because that isn’t the Way Things Have Been Done, or because There Isn’t Any Budget, or with other No, Because types of replies. Sound familiar? We have all been there.

Wardle says that our imaginations began to be stifled the day we went to first grade and were told to color between the lines. That attitude creates a river of negative thinking that lasts all of our lives. Another example: don’t call the person sitting by your office’s front door a receptionist, but rather the “Director of First Impressions.” See what that does? The person becomes empowered to do something important.

None of us goes to work saying we are going to kill a bunch of ideas, and yet that is what we all do. Wardle’s book will get you to “Yes, and” and help you become a better idea nurturer by building a team with diverse opinions and perspectives, and by bringing in naive experts who can stimulate your discussions.

Once again, priceless isn’t a marketing strategy

I want to introduce you to one of St. Louis’ premier restaurants, Charlie Gitto’s. It has been around for decades, and you can be sure of a great meal, great service, and a quiet room where you can hear your tablemates. So go on over to their website and check out their menu. I’ll wait while you take a look.


What don’t you see on their menu? Prices! Now lest you think this is common among top-tier restaurants, I did a quick check and found that many of their competitors have prices listed, some of whom charge even more than Charlie Gitto’s does.

My interior designer wife reminded me of another example. We have been to one of her lighting suppliers’ showrooms, where there are no printed prices on any of the items. Instead, there are QR codes that you can scan for the retail prices. Imagine looking at a dozen lamps or whatnot: this gets tedious really quickly. It is far easier to just look the items up on Google.

This is not a new subject for me. I wrote about this back in 2011, when I said priceless is not a marketing strategy. Back then, I wrote that vendors who don’t publish prices really are unsure about their pricing strategy, and so have instructed their PR firm or marcom team to just omit this information and gauge the reaction of potential customers and other interested parties. Based on this free research, they would then come back, adjust the Web pages, and add the appropriate pricing.

Well, I was wrong. These priceless vendors never plan to publish anything publicly. Take a look at these two examples, which are long on details about how their prices are calculated without providing any actual dollar amounts.

Tines’ page shows you just how many degrees of freedom a price can depend on. Depending on how you count, there are four basic tiers (one of which is free, kudos to them), seven different add-on tools, and five different usage tiers, which multiply out to at least 140 different prices, plus a note saying that older customers are on a different pricing model. Yikes!
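
If you are wondering where that count comes from, here is a back-of-the-envelope sketch in Python. The tier and add-on names are placeholders I made up; only the counts come from their pricing page.

    # Back-of-the-envelope count of Tines price points.
    # Names are placeholders; only the counts come from the pricing page.
    from itertools import product

    tiers = ["free", "tier-2", "tier-3", "tier-4"]   # 4 basic tiers
    addons = [f"addon-{i}" for i in range(1, 8)]     # 7 add-on tools
    usage = [f"usage-{i}" for i in range(1, 6)]      # 5 usage tiers

    price_points = list(product(tiers, addons, usage))
    print(len(price_points))  # 4 * 7 * 5 = 140 combinations, at minimum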

I was eventually able to squeeze a range out of Tines, but it took several emails. Now, I realize that posting a fancy restaurant’s menu and posting prices for a $500,000-or-so enterprise security service are different things, but not really. What if, when you came into the restaurant, you were presented with a menu that had different prices for the following:

  • If you are going to pay cash, you get a slight discount, since they avoid the credit card processing fee (I have started to see this situation more often).
  • If you are going to occupy your table for more than 90 minutes, there will be an add-on per minute charge.
  • If you made a reservation for a certain size party but show up with fewer diners, you will be hit with a surcharge.

You get the point. Some restaurants are even charging in advance, when you make your reservation. Those are restaurants that aren’t getting my business.

When I first wrote about this situation, I got a lot of comments. One vendor told me they had cleaned up their act and thanked me for my POV. One small step for vendorkind. But really, folks: the harder you make it for your customers, the fewer customers you will have. And that is something really priceless.

Red Cross blog: The Journey From Intern to Board Member

Every Red Cross volunteer has a unique background and reason for volunteering. Recent University of Missouri graduate CJ Nesser is no exception, and he is proof of the younger generation’s desire to take on heavy levels of responsibility and make a difference in the world around them. This is the story of his volunteer efforts; he is an impressive young man indeed!


Time to move away from Twitter

Yes, I know what it is now known as. When the Muskification began two years ago, I wrote that it was the beginning of the platform’s demise. I said then, “Troll Tweeting by your CEO is not a way to set corporate (or national) policy.” How true, even now.

Since then, I haven’t posted there. I still have my account, mainly because I don’t want anyone else with my name to grab it. But I have focused my content promotion efforts over on LinkedIn. This week I give a more coherent reason why you might do the same and follow in the footsteps of The Guardian, which announced earlier this month that it is moving off the platform. They said, “X now plays a diminished role in promoting our work.”

I got a chance to catch up with Sam Whitmore in this short video podcast, in which we discuss why PR pros should follow my example. Sam and I go way back nearly 40 years, to when we both worked as reporters and editorial managers at PC Week (which has since been unsatisfactorily renamed too). Sam takes the position that PR folks need to stick with Twitter for historical reasons, and because that is where they can get the best coverage results for their clients and keep track of influential press people. I claim the site is of declining influence and is toxic to anyone’s psyche, let alone their clients’ brand equity.

In January 2023, I wrote a series of suggestions about Twitter’s future, including how hard it would be to do content moderation (well, hard if they actually did it, which they apparently don’t) and how little operational transparency the social media operators now have.

Since then, Twitter has become the platform of outrage. As my colleague Scott Fulton points out, this is different from encouraging engagement: “If I state a point of view on X, the only way I can expect my statements to be amplified is if they can be rebutted or maybe repudiated.” My colleague Tara Calishain pointed me to a post on The Scholarly Kitchen, where several of its contributors describe their own moves away from Twitter.

Is Sam right, or am I? You be the judge, and feel free to comment here or on LinkedIn if you’d like.

CSOonline: How to pick the best endpoint detection and response solution

Endpoint detection and response (EDR) security software has grown in popularity and effectiveness as it allows security teams to quickly detect and respond to a variety of threats. EDR software offers visibility into endpoint activity in real time, continuously detecting and responding to attacker activity on endpoint devices including mobile phones, workstations, laptops, and servers.

In this buyer’s guide for CSOonline, I explain some of the benefits, trends, and questions to ask before evaluating any products. I also briefly touch upon six of the more popular tools. One of them, Palo Alto Networks’ Cortex XDR, has a dashboard that looks like the below screencap.

[Screencap: Palo Alto Networks Cortex XDR dashboard]

How to succeed at social media in this age of outrage

Samuel Stroud, the British blogger behind GiraffeSocial, has posted a column taking a closer look at how TikTok’s algorithm works — at least how he thinks it works. But that isn’t the point of the post for you, dear reader: he has some good advice on how to improve your own social media content, regardless of where it lands and how it is constructed.

Before I get to his suggestions, I should first explain why I used the word outrage in my hed. A new Tulane University study shows that people are more likely to interact with online content that challenges their views rather than agrees with them. In other words, they are driven by outrage. This is especially true when it comes to political engagement, which often stems from anger and fuels a vicious cycle. I realize that this isn’t news to many of you. But do keep it in mind as you read through some of Stroud’s suggestions.

You might still be using Twitter, for all I know, and are about to witness yet another trolling of the service as it turns all user blocks into mutes, which is Yet Another Reason I (continue to) steer clear of the thing. That, and its troller-in-chief. So now is a good time to review your social strategy and make sure all your content creators and social managers are up on the latest research.

Stroud points out several factors to keep track of (see the toy scoring sketch after this list):

  • Likes, shares and comments: the more engagement a post gets from others, the higher it is promoted. This also means you should respond to the comments.
  • Watch time: videos that are watched all the way through get boosted.
  • New followers: posts that generate new follower sign-ups also get boosted.
  • More meta is betta: captions, keywords, hashtags, custom thumbnails — all of these help increase engagement, which means paying attention to these “housekeeping” matters almost as much as to the actual content itself.
  • Your history matters: the algorithm tracks your previous interactions with a creator, a type of content, and other habits.
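
Here is a minimal sketch of how a feed-ranking score might combine those factors. The weights and the formula are my own inventions for illustration; no platform publishes its real ranking function.

    # Toy feed-ranking score combining the factors Stroud lists.
    # All weights are invented for illustration, not TikTok's actual values.
    def rank_score(likes, shares, comments, watch_fraction,
                   new_followers, metadata_count, prior_interactions):
        engagement = likes + 2 * shares + 3 * comments   # shares and comments count more
        completion = 1.5 if watch_fraction >= 0.95 else watch_fraction
        growth = 5 * new_followers              # posts that win followers get boosted
        meta_bonus = 0.1 * metadata_count       # captions, hashtags, thumbnails
        history = 1 + 0.2 * prior_interactions  # your past behavior multiplies it all
        return (engagement + growth + meta_bonus) * completion * history

    # A well-captioned video watched to the end by a repeat viewer:
    print(rank_score(likes=120, shares=10, comments=8, watch_fraction=1.0,
                     new_followers=3, metadata_count=5, prior_interactions=4))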

Now, most of this is common sense, and perhaps something you already knew if you have been using any social media platform anytime over the last couple of decades. But it still is nice to have it all packaged neatly in one place.

But here is the thing. The trick with social media success is being able to balance your verisimilitude with your outrage. It is a delicate balance, particularly if you are trying to promote your business and your brand. And if you are trying to communicate some solid info, and not just fuel the outrage fires, then what Stroud mentions should become second nature to your posting strategy.

Time to do an audio audit

I am not a tin-foil-hat kind of person. But last week, I replaced my voicemail greeting (recorded in my own voice) with a synthetic actor’s voice asking callers to leave a message. I will explain the reasoning behind this, and you can decide whether I should now accessorize my future outfits with the hat.
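
If you want to try the same thing, here is a minimal sketch using the offline pyttsx3 text-to-speech library. The library choice and greeting text are my own for illustration; this is not necessarily how my greeting was produced.

    # Minimal sketch: generate a synthetic voicemail greeting so your real
    # voice is not on the recording. pyttsx3 is my choice for illustration.
    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # speaking speed, in words per minute

    greeting = ("You have reached this number. "
                "Please leave a message after the tone.")
    engine.save_to_file(greeting, "greeting.wav")
    engine.runAndWait()  # blocks until the audio file has been written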

Last month, Techcrunch ran a story about the perils of audio deepfakes and mentioned how the CEO of Wiz, an Israeli cybersecurity firm that I have both visited and covered in the past, had to deal with a deepfake phone call that was sent out to many of its employees. It sounded like the CEO’s voice on the call. Almost. Fortunately, enough people at Wiz were paying attention and realized it was a scam. The call was assembled from snippets of a recorded conference session, and since even the most judicious audio editing can’t be perfect, people at Wiz caught the unnaturalness of the assemblage. The tell had nothing to do with AI and everything to do with human nature: the CEO’s speaking voice on stage is somewhat strained, because he is uncomfortable in front of an audience, and that isn’t his conversational voice.

But it is just a matter of time before the AI overlords figure this stuff out.

AI-based voice impersonations — or deepfakes, or whatever you want to call them — have been around for some time. I wrote about this technology for Avast’s blog in 2022 here. The piece mentioned impersonated phone calls from the mayor of Kyiv to several other European politicians. This deepfake timeline begins in 2017 but only goes up to the summer of 2021. Since then, there have been numerous advances in the technology. For example, a team of Microsoft researchers developed a text-to-speech program called VALL-E that can take a three-second audio sample of your voice and use it in an interactive conversation.

Another research report, written earlier this summer, “involves the automation of phone scams using LLM-powered voice-to-text and text-to-voice systems. Attackers can now craft sophisticated, personalized scams with minimal human oversight, posing an unprecedented challenge to traditional defenses.” One of the paper’s authors, Yisroel Mirsky, wrote that to me recently when I asked about the topic. He posits a “scam automation loop” where this is possible, and his paper shows several ways the guardrails of conversational AI can be easily circumvented, as shown here. I visited his Ben Gurion University lab in Israel back in 2022, where I got to witness a real-time deepfake audio generator. It needed just a few seconds of my voice, and then I was having a conversation with a synthetic replica of myself. Eerie and creepy, to be sure.

So now you see my paranoia about my voicemail greeting, which is a bit longer than a few seconds. It might be time to do an overall “audio audit,” for lack of a better term, as just another preventive step, especially for your corporate officers.

Still, you might argue that there is quite a lot of recorded audio of my voice that is available online, given that I am a professional speaker and podcaster. Anyone with even poor searching skills — let alone AI — can find copious samples where I drone on about something to do with technology for hours. So why get all hot and bothered about my voicemail greeting?

Mirsky said to me, “I don’t believe any vendors are leading the pack in terms of robust defenses against these kinds of LLM-driven threats. Many are still focusing on detecting deepfake or audio defects, which, in my opinion, is increasingly a losing battle as the generative AI models improve.” So maybe changing my voicemail greeting is like putting one’s finger in a very leaky dike. Or perhaps it is a reminder that we need alternative strategies (dare I say a better CAPTCHA? He has one such proposal in another paper here). Either way, maybe a change of headgear is called for after all.

CSOonline: Top 5 security mistakes software developers make

Creating and enforcing the best security practices for application development teams isn’t easy. Software developers don’t necessarily write their code with security in mind, and as the appdev landscape grows more complex, securing apps becomes more of a challenge, with cloud computing, containers, and API connections all in the mix. It is a big problem: security flaws were found in 80% of the applications scanned by Veracode in a recent analysis.

As attacks continue to plague cybersecurity leaders, I compiled a list of five common mistakes by software developers and how they can be prevented for a piece for CSOonline.

Me and the mainframe

I recently wrote a sponsored blog for VirtualZ Computing, a startup involved in innovative mainframe software. As I was writing the post, I was thinking about the various points in my professional life where I came face-to-CPU with IBM’s Big Iron, as it once was called. (For what passes for comic relief, check out this series of videos about selling mainframes.)

My last job working in a big IT shop came about in the summer of 1984, when I moved across the country to LA to work for an insurance company. The company was a huge customer of IBM mainframes and was just getting into buying PCs for its employees, including mainframe developers and ordinary employees (no one called them end users back then) who wanted to run their own spreadsheets and create their own documents. There were hundreds of people working on and around the mainframe, which was housed in its own inner sanctum of raised-floor rooms. I wrote about this job here; it was interesting because it was the last time I worked in IT before switching careers to tech journalism.

Back in 1984, if I wanted to write a program, I first had to create it by typing out a deck of punch cards. This was done at a special station the size of a piece of office furniture. Each card could contain a single line of code, and if you made a mistake you had to toss the card and start anew. When you had your deck, you would feed it into a specialized card reader that transferred the program to the mainframe and created a “batch job,” meaning my program would run sometime during the middle of the night. I would get my output the next morning, if I was lucky. If I had made any typing errors on my cards, the printout would be a cryptic set of error messages, and I would have to fix the errors and try again the next night. Finding that meager output was akin to getting a college rejection letter in the mail; the acceptances came in the thick envelopes. Am I dating myself enough here?

Today’s developers are probably laughing at this situation. They have coding environments that immediately flag syntax errors, tools that dynamically stop embedded malware from being run, and all sorts of other fancy tricks. If they have to wait more than 10 milliseconds for this information, they complain about how slow their platform is. Code is put into production in a matter of moments, rather than the months we had to endure back in the day.

Even though I roamed around the three downtown office towers that housed our company’s workers, I don’t remember ever setting foot in our Palais d’mainframe. However, over the years I have been to my share of data centers across the world. One visit, in 1993, involved turning off a mainframe for Edison Electric Institute in Washington DC; I wrote about the experience and how Novell NetWare-based apps replaced many of its functions. Another, in 2007, involved moving a data center from a basement (which would periodically flood) into a purpose-built building next door. That data center housed the more souped-up microprocessor-based servers that would form the beginnings of the massive CPU collections used in today’s z Series mainframes, by the way.

Mainframes had all sorts of IBM gear that required care and feeding, along with lots of knowledge that I used to have at my fingertips: I knew my way around IBM’s proprietary Systems Network Architecture protocols and its proprietary Token Ring networking, for example. And let’s not forget that the mainframe ran programs written in COBOL and used all sorts of other hardware to connect things together, with proprietary bus-and-tag cables. When I was making the transition to PC Week in the 1980s, IBM was making the (eventually failed) transition to peer-to-peer mainframe networking with a bunch of proprietary products. Are you seeing a trend here?

Speaking of the IBM PC, it was the first product from IBM built with parts made by others, rather than its own stuff. That was a good decision, and it was successful because you could add a graphics card (the first PCs just did text, and monochrome at that), extra memory, or a modem. Or an adapter card that connected to another cabling scheme (coax) and turned the PC into a mainframe terminal. Yes, this was before wireless networks became useful, and you can see why.

Now IBM mainframes — there are some 10,000 of them still in the wild — come with the ability to run Linux and operate across TCP/IP networks, and about a third of them run Linux as their main OS. This is akin to having one foot in the world of distributed cloud computing and one foot back in the dinosaur era. So let’s talk about my client VirtualZ and where they come into this picture.

They created software – mainframe software – that enables distributed applications to access mainframe data sets, using OpenAPI protocols and database connectors. The data stays put on the mainframe but is available to applications that we know and love, such as Salesforce and Tableau. It is a terrific idea, and just like the original IBM PC it supports open systems. This makes the mainframe just another cloud-connected computer, and shows that the mainframe is still an exciting and powerful way to go.

Until VirtualZ came along, developers who wanted access to mainframe data had to go through all sorts of contortions to get it — much like what we had to do in the 1980s and 1990s, for that matter. Companies like Snowflake and Fivetran made very successful businesses out of doing these “extract, transform and load” operations into what are now called data warehouses. VirtualZ eliminates those steps, and your data is available in real time, because it never leaves the cozy comfort of the mainframe, with all of its minions and backups and redundant hardware. You don’t have to build a separate warehouse in the cloud, because your mainframe is now cloud-accessible all the time.
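
To make that concrete, here is a hypothetical sketch of what querying a mainframe data set through an OpenAPI-style REST endpoint might look like. The gateway URL, endpoint path, data set name, and field names are all invented for illustration; this is not VirtualZ’s actual API.

    # Hypothetical sketch: reading mainframe records in place over an
    # OpenAPI-style REST endpoint, with no ETL copy made along the way.
    # Every URL, path, and field name here is invented for illustration.
    import requests

    BASE_URL = "https://mainframe-gateway.example.com/api/v1"

    def fetch_policies(customer_id: str) -> list[dict]:
        """Query a mainframe data set; the records never leave the mainframe."""
        resp = requests.get(
            f"{BASE_URL}/datasets/INSURANCE.POLICIES/records",
            params={"filter": f"customer_id eq '{customer_id}'"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["records"]

    # A BI tool like Tableau would issue the equivalent query through a connector.
    for record in fetch_policies("C-1001"):
        print(record)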

I think VirtualZ’s software will usher in a new mainframe era, one that puts us even further from the days of punch cards. It shows the power and persistence of the mainframe, and how IBM had the right computer for today’s enterprise data, just not the right context when it was invented. For Big Iron to succeed in today’s digital world, it needs a lot of help from little iron.