Once again, priceless isn’t a marketing strategy

I want to introduce you to one of St. Louis’ premier restaurants, Charlie Gitto’s. It has been around for decades, and you can count on a great meal, attentive service, and a dining room quiet enough to hear your tablemates. So go on over to their website and check out their menu. I’ll wait while you take a look.


What don’t you see on their menu? Prices! Now lest you think this is common among top-tier restaurants, I did a quick check and found that many of their competitors list prices, some of whom charge even more than Charlie Gitto’s does.

My interior designer wife reminded me of another example. We have been to one of her lighting suppliers’ showrooms. There are no printed prices on any of the items; instead, there are QR codes you can scan for the retail prices. Imagine looking at a dozen lamps or whatnot this way: it gets tedious really quickly. Far easier to just look the products up on Google.

This is not a new subject for me. I wrote about this back in 2011, when I said priceless is not a marketing strategy. Back then, I wrote that vendors who don’t publish prices are really unsure about their pricing strategy, and so have instructed their PR firm or marcom team to just omit this information and see how potential customers and other interested parties react. Based on this free research, they would come back, adjust the Web pages, and add the appropriate pricing.

Well, I was wrong. These priceless vendors never plan to publish anything publicly. Take a look at these two examples, which are long on detail about how their prices are calculated without providing any actual dollar amounts.

Tines’ page shows you just how many degrees of freedom a price depends on. Depending on how you count, there are four basic tiers (one of which is free, kudos to them), seven different add-on tools, and five different usage tiers, which works out to at least 140 different prices. Then there is a note saying that older customers are on a different pricing model. Yikes!
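For the curious, here is the back-of-the-envelope arithmetic behind that 140 figure, as a tiny sketch. It simply multiplies the tier counts above and assumes each combination is priced separately, counting each add-on on its own rather than in bundles:

```python
# Rough count of Tines' possible price points, using the tier counts described
# above: four base tiers, seven add-on tools, five usage tiers. Assumes each
# combination gets its own price and counts add-ons singly, not in bundles.
from itertools import product

base_tiers = 4    # one of these is free
addon_tools = 7
usage_tiers = 5

combos = list(product(range(base_tiers), range(addon_tools), range(usage_tiers)))
print(len(combos))  # 4 * 7 * 5 = 140 distinct price points, at minimum
```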

I was eventually able to squeeze a range out of Tines, but it took several emails. Now I realize that posting a fancy restaurant’s menu and posting prices for a $500,000-or-so enterprise security service are different things, but not really. What if, when you came into the restaurant, they presented you with a menu that had different prices based on the following:

  • If you are going to pay cash, you get a slight discount, since they avoid the credit card processing fee (I have started to see this more often).
  • If you are going to occupy your table for more than 90 minutes, there will be an add-on per minute charge.
  • If you made a reservation for a certain size party but show up with fewer diners, you will be hit with a surcharge.

You get the point.  Some restaurants are even charging in advance, when you make your reservation. Those are restaurants that aren’t getting my business.

When I first wrote about this situation, I had a lot of comments. One vendor told me they cleaned up their act and thanked me for my POV. One small step for vendorkind. But really folks: the harder you make it for your customers, the fewer customers you will have. And that is something really priceless.

Red Cross blog: The Journey From Intern to Board Member

Every Red Cross volunteer has a unique background and reason for volunteering. Recent University of Missouri graduate CJ Nesser is no exception, and he is proof of the younger generation’s desire to take on significant responsibility and make a difference in the world around them. This is the story of his volunteer efforts; he is an impressive young man indeed!


Time to move away from Twitter

Yes, I know what it is now known as. When the Muskification began two years ago, I wrote that this was the beginning of its demise. I said then, “Troll Tweeting by your CEO is not a way to set corporate (or national) policy.” How true, even now.

Since then, I haven’t posted there. I still have my account, mainly because I don’t want anyone else with my name to grab it. But I have focused my content promotion efforts over on LinkedIn. This week I give a more coherent reason why you might do the same and follow in the footsteps of The Guardian, which announced earlier this month that it is moving off the platform. They said, “X now plays a diminished role in promoting our work.”

I got a chance to catch up with Sam Whitmore in this short video podcast. We discuss why PR pros should follow my example. Sam and I go way back nearly 40 years, to when we both worked as reporters and editorial managers at PC Week (which has since been unsatisfactorily renamed too). Sam takes the position that PR folks need to stick with Twitter for historical reasons, and because that is where they can get the best coverage results for their clients and keep track of influential press people. I claim the site is a declining influence, and is so toxic that it harms anyone’s psyche, let alone their clients’ brand equity.

In January 2023, I wrote a series of suggestions on Twitter’s future, including how hard it will be to do content moderation (well, hard if they actually did it, which they apparently don’t) and how little operational transparency the social media operators now have.

Since then, Twitter has become the platform of outrage. As my colleague Scott Fulton points out, this is different from encouraging engagement: “If I state a point of view on X, the only way I can expect my statements to be amplified is if they can be rebutted or maybe repudiated.” My colleague Tara Calishain pointed me to a post on The Scholarly Kitchen, where several of its contributors describe their own moves away from Twitter.

Is Sam right, or am I? You be the judge, and feel free to comment here or on LinkedIn if you’d like.

CSOonline: How to pick the best endpoint detection and response solution

Endpoint detection and response (EDR) security software has grown in popularity and effectiveness as it allows security teams to quickly detect and respond to a variety of threats. EDR software offers visibility into endpoint activity in real time, continuously detecting and responding to attacker activity on endpoint devices including mobile phones, workstations, laptops, and servers.

In this buyer’s guide for CSOonline, I explain some of the benefits, trends, and questions to ask before evaluating any products. I also briefly touch upon six of the more popular tools. One of them, Palo Alto Networks’ Cortex XDR, has a dashboard that looks like the below screencap.

[Screencap: Palo Alto Networks Cortex XDR dashboard]

How to succeed at social media in this age of outrage

Samuel Stroud, the British blogger behind GiraffeSocial, has posted a column taking a closer look at how TikTok’s algorithm works (at least how he thinks it works). But that isn’t the point of the post for you, dear reader: he has some good advice on how to improve your own social media content, regardless of where it lands and how it is constructed.

Before I get to his suggestions, I should first turn to why I used the word outrage in my hed. This is because a new Tulane University study shows that people are more likely to interact with online content that challenges their views, rather than agrees with them. In other words, they are driven by outrage. This is especially true when it comes to political engagement, which often stems from anger, and fuels a vicious cycle. I realize that this isn’t news to many of you. But do keep this in mind as you read through some of Stroud’s suggestions.

You might still be using Twitter, for all I know, and are about to witness yet another trolling of the service as it turns all user blocks into mutes, which is Yet Another Reason I (continue to) steer clear of the thing. That, and its troller-in-chief. So now is a good time to review your social strategy and make sure all your content creators or social managers are up on the latest research.

Stroud points out several factors to keep track of (a toy scoring sketch follows the list):

  • Likes, shares and comments: the more engagement a post gets from others, the higher it is promoted. This also means you should respond to the comments.
  • Watch time: videos that are watched all the way through get boosted.
  • New followers: posts that generate new follower sign-ups also get boosted.
  • More meta is betta: captions, keywords, hashtags, and custom thumbnails all help increase engagement, which means paying attention to these “housekeeping” matters almost as much as to the actual content itself.
  • Your history matters: your previous interactions with a creator, a type of content, or other trackable habits shape what you are shown.
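To picture how these signals might combine, here is a toy scoring sketch. It is emphatically not TikTok’s actual ranking code, and the weights are invented; it just shows the general shape of a recommendation score built from the factors above:

```python
# Toy ranking sketch: NOT TikTok's real algorithm, just a way to picture how
# the signals above (engagement, watch time, new followers, metadata, and
# viewer history) could combine into a single promotion score. Weights are made up.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    avg_watch_fraction: float   # 0.0 to 1.0, how much of the video viewers finish
    new_followers: int          # follows attributed to this post
    metadata_score: float       # 0.0 to 1.0, captions/keywords/hashtags/thumbnail quality
    viewer_affinity: float      # 0.0 to 1.0, this viewer's history with the creator/topic

def promotion_score(p: Post) -> float:
    # Comments and shares count for more than likes in this sketch
    engagement = p.likes + 3 * p.comments + 5 * p.shares
    return (
        0.4 * engagement
        + 200 * p.avg_watch_fraction   # completed watches get a big boost
        + 10 * p.new_followers         # posts that convert viewers to followers rank higher
        + 50 * p.metadata_score        # the "housekeeping" signals
        + 100 * p.viewer_affinity      # personalization based on viewer history
    )

print(promotion_score(Post(likes=120, shares=15, comments=40,
                           avg_watch_fraction=0.8, new_followers=6,
                           metadata_score=0.9, viewer_affinity=0.5)))
```

The takeaway is that every factor in Stroud’s list shows up as a term you can nudge, which is why the housekeeping items matter almost as much as the content itself.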

Now, most of this is common sense, and perhaps something you already knew if you have been using any social media platform anytime over the last couple of decades. But it still is nice to have it all packaged neatly in one place.

But here is the thing. The trick with social media success is being able to balance your verisimilitude with your outrage. It is a delicate balance, particularly if you are trying to promote your business and your brand. And if you are trying to communicate some solid info, and not just fuel the outrage fires, then what Stroud mentions should become second nature to your posting strategy.

Time to do an audio audit

I am not a tin-foil-hat kind of person. But last week, I replaced my voicemail greeting (recorded in my own voice) with a synthetic actor’s voice asking callers to leave a message. I will explain the reasoning behind this, and you can decide whether I should now accessorize my future outfits with the hat.

Last month, TechCrunch ran a story about the perils of audio deepfakes and mentioned how the CEO of Wiz, an Israeli cybersecurity firm that I have both visited and covered in the past, had to deal with a deepfake phone call that was sent out to many of its employees. It sounded like the CEO’s voice on the call. Almost. Fortunately, enough people at Wiz were paying attention and realized it was a scam. The call was assembled from snippets of a recorded conference session, but even the most judicious audio editing still can’t be perfect, and people at Wiz caught the unnaturalness of the assemblage. The reason for the difference had nothing to do with AI and everything to do with human nature: his speaking voice is somewhat strained, because he is uncomfortable in front of an audience, and that isn’t his conversational voice.

But it is just a matter of time before the AI overlords figure this stuff out.

AI-based voice impersonations (or deepfakes, or whatever you want to call them) have been around for some time. I wrote about this technology for Avast’s blog in 2022 here. The piece mentioned impersonated phone calls from the mayor of Kyiv to several other European politicians. This deepfake timeline begins in 2017 but only goes up to the summer of 2021. Since then, there have been numerous advances in the technology. For example, a team of Microsoft researchers has developed a text-to-speech program called VALL-E that can take a three-second audio sample of your voice and use it in an interactive conversation.

Another research report, written earlier this summer, describes a threat that “involves the automation of phone scams using LLM-powered voice-to-text and text-to-voice systems. Attackers can now craft sophisticated, personalized scams with minimal human oversight, posing an unprecedented challenge to traditional defenses.” One of the paper’s authors, Yisroel Mirsky, wrote that to me recently when I asked about the topic. He posits a “scam automation loop” where this is possible, and his paper shows several ways the guardrails of conversational AI can be easily circumvented, as shown here. I visited his Ben Gurion University lab in Israel back in 2022, where I got to witness a real-time deepfake audio generator. It needed just a few seconds of my voice as a sample, and then I was having a conversation with a synthetic replica of myself. Eerie and creepy, to be sure.

So now you see my paranoia about my voicemail greeting, which is a bit longer than a few seconds. It might be time to do an overall “audio audit,” for lack of a better term, as just another preventative step, especially for your corporate officers.

Still, you might argue that there is quite a lot of recorded audio of my voice that is available online, given that I am a professional speaker and podcaster. Anyone with even poor searching skills — let alone AI — can find copious samples where I drone on about something to do with technology for hours. So why get all hot and bothered about my voicemail greeting?

Mirsky said to me, “I don’t believe any vendors are leading the pack in terms of robust defenses against these kinds of LLM-driven threats. Many are still focusing on detecting deepfake or audio defects, which, in my opinion, is increasingly a losing battle as the generative AI models improve.” So maybe changing my voicemail greeting is like putting one’s finger in a very leaky dike. Or perhaps it is a reminder that we need alternative strategies (dare I say a better CAPTCHA? He has one such proposal in another paper here.) So maybe a change of headgear is called for after all.

CSOonline: Top 5 security mistakes software developers make

Creating and enforcing the best security practices for application development teams isn’t easy. Software developers don’t necessarily write their code with these practices in mind, and as the appdev landscape becomes more complex, securing apps across cloud computing, containers, and API connections becomes more of a challenge. It is a big problem: security flaws were found in 80% of the applications scanned by Veracode in a recent analysis.

As attacks continue to plague cybersecurity leaders, I compiled a list of five common mistakes software developers make, and how they can be prevented, in a piece for CSOonline.

Me and the mainframe

I recently wrote a sponsored blog for VirtualZ Computing, a startup involved in innovative mainframe software. As I was writing the post, I was thinking about the various points in my professional life where I came face-to-CPU with IBM’s Big Iron, as it once was called. (For what passes for comic relief, check out this series of videos about selling mainframes.)

My last job working for a big IT shop came about in the summer of 1984, when I moved across the country to LA to work for an insurance company. The company was a huge customer of IBM mainframes and was just getting into buying PCs for its employees, including mainframe developers and ordinary employees (no one called them end users back then) who wanted to run their own spreadsheets and create their own documents. There were hundreds of people working on and around the mainframe, which was housed in its own inner sanctum, a series of raised-floor rooms. I wrote about this job here; it was interesting because it was the last time I worked in IT before switching careers to tech journalism.

Back in 1984, if I wanted to write a program, I had to first create it by typing out a deck of punch cards. This was done at a special station the size of a piece of office furniture. Each card could contain the instructions for a single line of code. If you made a mistake, you had to toss the card and start anew. When you had your deck, you would feed it into a specialized card reader that would transfer the program to the mainframe and create a “batch job,” meaning my program would then run sometime during the middle of the night. I would get my output the next morning, if I was lucky. If I made any typing errors on my cards, the printout would be a cryptic set of error messages, and I would have to fix the errors and try again the next night. Finding that meager output was akin to getting a college rejection letter in the mail; the acceptances would be the thick envelopes. Am I dating myself enough here?

Today’s developers are probably laughing at this situation. They have coding environments that immediately flag syntax errors, tools that dynamically stop embedded malware from being run, and all sorts of other fancy tricks. If they have to wait more than 10 milliseconds for this information, they complain about how slow their platform is. Code is put into production in a matter of moments, rather than the months we had to endure back in the day.

Even though I roamed around the three downtown office towers that housed our company’s workers, I don’t remember ever setting foot in our Palais d’mainframe. However, over the years I have been to my share of data centers around the world. One visit involved turning off a mainframe for the Edison Electric Institute in Washington DC in 1993; I wrote about the experience and how Novell NetWare-based apps replaced many of its functions. Another involved moving a data center from a basement (which would periodically flood) into a purpose-built building next door, in 2007. That data center housed souped-up microprocessor-based servers, the beginnings of the massive CPU collections used in today’s z Series mainframes, by the way.

Mainframes had all sorts of IBM gear that required care and feeding, and lots of knowledge that I used to have at my fingertips: I knew my way around IBM’s proprietary Systems Network Architecture protocols and its proprietary Token Ring networking, for example. And let’s not forget that they ran programs written in COBOL and used all sorts of other hardware, connected together with proprietary bus-and-tag cables. When I was making the transition to PC Week in the 1980s, IBM was making the (eventually failed) transition to peer-to-peer mainframe networking with a bunch of proprietary products. Are you seeing a trend here?

Speaking of the IBM PC, it was the first product from IBM built with off-the-shelf parts made by others, rather than its own components. That was a good decision, and it was successful because you could add a graphics card (the first PCs just did text, and monochrome at that), extra memory, a modem, or an adapter card that connected to another cabling scheme (coax) and turned the PC into a mainframe terminal. Yes, this was before wireless networks became useful, and you can see why.

Now IBM mainframes — there are some 10,000 of them still in the wild — come with the ability to run Linux and operate across TCP/IP networks, and about a third of them are running Linux as their main OS. This was akin to having one foot in the world of distributed cloud computing, and one foot back in the dinosaur era. So let’s talk about my client VirtualZ and where they come into this picture.

They created software – mainframe software – that enabled distributed applications to access mainframe data sets, using OpenAPI protocols and database connectors. The data stays put on the mainframe but is available to applications that we know and love such as Salesforce and Tableau.  It is a terrific idea, just like the original IBM PC in that it supports open systems. This makes the mainframe just another cloud-connected computer, and shows that the mainframe is still an exciting and powerful way to go.

Until VirtualZ came along, developers who wanted access to mainframe data had to go through all sorts of contortions to get it, much like what we had to do in the 1980s and 1990s for that matter. Companies like Snowflake and Fivetran made very successful businesses out of doing these “extract, transform and load” operations into what are now called data warehouses. VirtualZ eliminates these steps, and your data is available in real time, because it never leaves the cozy comfort of the mainframe, with all of its minions and backups and redundant hardware. You don’t have to build a separate warehouse in the cloud, because your mainframe is now cloud-accessible all the time.

I think VirtualZ’s software will usher in a new mainframe era, one that puts the punch cards even further behind us. It shows the power and persistence of the mainframe, and how IBM built the right computer for today’s enterprise data, just not in the right context when it was invented. For Big Iron to succeed in today’s digital world, it needs a lot of help from little iron.

The Cloud-Ready Mainframe: Extending Your Data’s Reach and Impact

(This post is sponsored by VirtualZ Computing)

Some of the largest enterprises are finding new uses for their mainframes. And instead of competing with cloud and distributed computing, the mainframe has become a complementary asset that adds new productivity and a level of cost-effective scale to existing data and applications. 

While the cloud does quite well at elastically scaling up resources as application and data demands increase, the mainframe is purpose-built for the largest-scale digital applications. More importantly, it has kept pace as these demands have mushroomed over its 60-year reign, which is why so many large enterprises continue to use them. Having mainframes as part of a distributed enterprise application portfolio can be a significant and savvy use case, and a reason for their role and importance to grow in the future.

Estimates suggest that there are about 10,000 mainframes in use today, which may not seem like a lot except that they can be found across the board in more than two-thirds of the Fortune 500. In the past, they used proprietary protocols such as Systems Network Architecture, had applications written in now-obsolete coding languages such as COBOL, and ran on custom CPU hardware. Those days are behind us: instead, the latest mainframes run Linux and TCP/IP across hundreds of multi-core microprocessors.

But even speaking cloud-friendly Linux and TCP/IP doesn’t remove two main problems for mainframe-based data. First off, many mainframe COBOL apps are their own island, isolated from the end-user Java experience and coding pipelines and programming tools. To break this isolation usually means an expensive effort to convert and audit the code. 

A second issue has to do with data lakes and data warehouses. These applications have become popular ways for businesses to spot trends quickly and adjust IT solutions as their customers’ data needs evolve. But the underlying applications typically require near real-time access to existing mainframe data, such as financial transactions, sales and inventory levels, or airline reservations. At the core of any lake or warehouse is a series of “extract, transform and load” operations that move data back and forth between the mainframe and the cloud. These operations only capture data as of a particular moment in time, and they require custom programming efforts to accomplish.

What was needed was an additional step to make mainframes easier for IT managers to integrate with other cloud and distributed computing resources, and that meant a new set of software tools. The first step came with initiatives such as IBM’s z/OS Connect, which enabled distributed applications to access mainframe data. But it continued the mindset of a custom programming effort and didn’t really provide direct access for distributed applications.

Fully realizing the vision of the mainframe as an equal cloud node required a major makeover, thanks to companies such as VirtualZ Computing. They latched on to the OpenAPI effort, which had previously been part of the cloud and distributed world. Using this standard, they created connectors that make it easier for vendors to access real-time data and integrate with a variety of distributed data products, such as MuleSoft, Tableau, TIBCO, Dell Boomi, Microsoft Power BI, Snowflake and Salesforce. Instead of complex, single-use data transformations, VirtualZ enables real-time read and write access to business applications. This means the mainframe can now become a full-fledged and efficient cloud computer.
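To make the shape of that integration concrete, here is a minimal sketch of what a distributed application reading mainframe data through an OpenAPI-described REST connector could look like. The host, path, and field names are hypothetical placeholders for illustration, not VirtualZ’s actual interface:

```python
# Hypothetical example: a distributed app reading mainframe data in place
# through an OpenAPI-described REST connector. Host, path, and fields are
# illustrative placeholders, not VirtualZ's real API.
import requests

BASE_URL = "https://mainframe-connector.example.com/api/v1"  # hypothetical gateway

def fetch_inventory(item_id: str) -> dict:
    """Read a single inventory record directly from the mainframe data set."""
    resp = requests.get(
        f"{BASE_URL}/inventory/{item_id}",
        headers={"Authorization": "Bearer <token>"},  # existing mainframe security still applies
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"item_id": "A123", "on_hand": 42, "warehouse": "STL"}

if __name__ == "__main__":
    record = fetch_inventory("A123")
    print(record)
    # No ETL job and no copy in a separate warehouse: the record is read at the
    # moment it is needed, and the data never leaves the mainframe.
```

The point is the pattern: a plain REST call against a published API at the moment the data is needed, rather than a nightly extract job that copies it into a separate warehouse.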

VirtualZ CEO Jeanne Glass says, “Because data stays securely and safely on the mainframe, it is a single source of truth for the customer and still leverages existing mainframe security protocols.” There isn’t any need to convert COBOL code, and no need to do any cumbersome data transformations and extractions.

The net effect is an overall cost reduction, since an enterprise isn’t paying for expensive high-resource cloud instances. It also makes the business operation more agile, since data stays in one place and is available the moment it is needed by a particular application. These uses extend the effective life of a mainframe without any costly data or process conversions, and they reduce risk and complexity while solving data access and report migration challenges efficiently and at scale, which is key for organizations transitioning to hybrid cloud architectures. The ultimate result is that one of those hybrid architectures now includes the mainframe itself.

Distinguishing between news and propaganda is getting harder to do

Social media personalization has turned the public sphere into an insane asylum, where every person can have their own reality. So said Maria Ressa recently, describing the results of a report from a group of data scientists about the US information ecosystem. The authors are from a group called The Nerve, which she founded.

I wrote about Ressa when she won the Nobel Peace Prize back in 2021 for her work running the Philippine online news site Rappler. She continues to innovate and afflict the comfortable, as the saying goes. She spoke earlier this month at an event at Columbia University where she covered the report’s findings. The irony of the location wasn’t lost on me: this is the same place where students camped out, driven by various misinformation campaigns.

One of the more interesting elements of the report is a new seven-layer model (no, not that OSI one) that tracks how the online world manipulates us. It starts with social media incentives, which are designed to promote more outrage and less actual news. This in turn fuels real-world violence, which is then amplified by the efforts of authoritarian-run nations that target Americans and polarize the public sphere even further. The next layer turns info ops into info warfare, feeding more outrage and conflict. The final layer is our elections, shaped by the lack of real news and the absence of any general agreement on facts.

Their report is a chilling account of the state of things today, to be sure. And thanks to fewer “trust and safety” staff watching the feeds, greater use of AI in online searches by Google and Microsoft, and Facebook truncating actual news in its social feeds and as a result referring less traffic to online news sites, we have a mess on our hands. News media now shares a shrinking piece of the attention pie with independent creators. The result is that “Americans will have fewer facts to go by, less news on their social media feeds, and more outrage, fear, and hate.” This week it has reached a fever pitch, and I wish I could just turn it all off.

The report focuses on three issues that have divided us, both generationally and politically: immigration, abortion, and the Israel/Hamas war. It takes a very data-driven approach. For example, #FreePalestine hashtag views on TikTok outnumber #StandWithIsrael views by 446M to 16M, and on Facebook and Twitter the ratios are 18x and 32x respectively. The measurement periods for each network vary, but you get the point.

The report has several conclusions. First, personalized social media content has formed echo chambers, fed by hyper-partisan sources, that blur the line between news and propaganda. Journalism and source vetting are becoming rarer, and local TV news is being remade as it competes with cable outrage channels. As more of our youth engage further with social media, they become more vulnerable to purpose-fed disinformation and manipulation, and less able to distinguish between news and propaganda. And this generational divide continues to widen as the years pass.

Remember when the saying went, if you aren’t paying for the service, you are the product? That seems so naïve now. Social media is now a tool of geopolitics, and gone are the trust and safety teams that once tried to filter the most egregious posts. And as more websites deploy AI-based chatbots, you don’t even know if you are talking to a human. This just continues the worsening of internet platforms that Cory Doctorow wrote about almost two years ago (he used a more colorful term).

In her address to the Nobel committee back in 2021, Ressa said, “Without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality and no democracy.”