I have been working with companies since the tender age of 16… and in these short 35 years I have seen many things happen to my clients: seemingly prominent and arrogant companies "vanish" like soap bubbles, others survive despite deep shifts in their markets, and still others get acquired and change course completely, not necessarily toward success.
I remember a maxim of Helmut Kohl's: "you do not change direction once the train has left." That maxim cuts both ways, because once the "train has left" you must be certain of the destination, and I leave you to imagine the consequences of uncertainty. Yet uncertainty is surely a constant. What is needed is a true vision, not the buzzword of the nineties; I mean a "vision" in earnest!


Let me get to the point: why did some companies crash while others survived? The answer I usually got was "simple, you just have to be flexible." Flexibility is a virtue, but it matters how it is applied. I have found many companies where the concept of flexibility was inextricably bound to the iron fist of the owner or the partners, with a "star-shaped" organization chart in which everything revolved around the "boss" at the center, because the company is "his" (and that is not up for discussion). This flawed vision is recurrent in the Italian market, and it forfeits all the benefits that being a real "company" should bring.
A company is a system, a living organism, full of the weaknesses typical of humankind, but also of its strengths. The weaknesses can be remedied through #organization and #method. I do not believe I am the only one who considers these two concepts fundamental. Putting them into practice, which is not simple but not impossible either, is what makes a company truly flexible and long-lived.


Do yourselves a favor and ask what your primary criteria are, as a company rather than as individuals; then strip them to the bone and put them into practice by organizing your company through an organization chart and a periodic review (I suggest quarterly) aimed at correcting any deviation from the course you have set.


Do that, and nobody will stop your train.

Fog computing extends the concept of cloud computing to the network edge, making it ideal for internet of things (IoT) and other applications that require real-time interactions.

Fog computing is the concept of a network fabric that stretches from the outer edges of where data is created to where it will eventually be stored, whether that’s in the cloud or in a customer’s data center.

Fog is another layer of a distributed network environment and is closely associated with cloud computing and the internet of things (IoT). Public infrastructure as a service (IaaS) cloud vendors can be thought of as a high-level, global endpoint for data; the edge of the network is where data from IoT devices is created.

Fog computing is the idea of a distributed network that connects these two environments. “Fog provides the missing link for what data needs to be pushed to the cloud, and what can be analyzed locally, at the edge,” explains Mung Chiang, dean of Purdue University’s College of Engineering and one of the nation’s top researchers on fog and edge computing.

According to the OpenFog Consortium, a group of vendors and research organizations advocating for the advancement of standards in this technology, fog computing is “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.”

Benefits of fog computing

Fundamentally, the development of fog computing frameworks gives organizations more choices for processing data wherever it is most appropriate to do so. For some applications, data may need to be processed as quickly as possible – for example, in a manufacturing use case where connected machines need to be able to respond to an incident as soon as possible.

Fog computing can create low-latency network connections between devices and analytics endpoints. This architecture in turn reduces the amount of bandwidth needed, compared with sending that data all the way back to a data center or cloud for processing. It can also be used in scenarios where there is no connection available to send data on, so it must be processed close to where it is created. As an added benefit, users can place security features in a fog network, from segmented network traffic to virtual firewalls to protect it.
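As a concrete illustration of that bandwidth saving, here is a minimal Python sketch of the kind of filtering a fog node might do: raw readings are aggregated locally and only a compact summary, plus any anomalous values, is forwarded upstream. The threshold and the `send_to_cloud` callback are invented for the example.

```python
import statistics

ANOMALY_THRESHOLD = 90.0  # hypothetical alert level for this sensor type

def summarize(readings):
    """Reduce a window of raw sensor readings to a compact summary."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def process_window(readings, send_to_cloud):
    """Forward only a summary and the anomalous points, not the raw stream."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    send_to_cloud({"summary": summarize(readings), "anomalies": anomalies})

# An hour of per-second readings collapses into one small upstream message.
process_window([72.5, 73.1, 91.2, 72.8], print)
```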

Applications of fog computing

Fog computing is in the nascent stages of being rolled out in formal deployments, but a variety of use cases have been identified as potentially ideal scenarios for it.

Connected Cars: The advent of semi-autonomous and self-driving cars will only increase the already large amount of data vehicles create. Having cars operate independently requires a capability to locally analyze certain data in real-time, such as surroundings, driving conditions and directions. Other data may need to be sent back to a manufacturer to help improve vehicle maintenance or track vehicle usage. A fog computing environment would enable communications for all of these data sources both at the edge (in the car), and to its end point (the manufacturer).

Smart cities and smart grids: Like connected cars, utility systems are increasingly using real-time data to run more efficiently. Sometimes this data is generated in remote areas, so processing close to where it's created is essential. Other times the data needs to be aggregated from a large number of sensors. Fog computing architectures could be devised to solve both of these issues.

Real-time analytics: A host of use cases call for real-time analytics, from manufacturing systems that need to react to events as they happen to financial institutions that use real-time data to inform trading decisions or monitor for fraud. Fog computing deployments can help facilitate the transfer of data between where it's created and the variety of places it needs to go.

Fog computing and 5G mobile computing

Some experts believe the expected rollout of 5G mobile connections in 2018 and beyond could create more opportunities for fog computing. “5G technology in some cases requires very dense antenna deployments,” explains Andrew Duggan, senior vice president of technology planning and network architecture at CenturyLink. In some circumstances antennas need to be less than 20 kilometers from one another. In a use case like this, a fog computing architecture could be created among these stations, including a centralized controller that manages applications running on the 5G network and handles connections to back-end data centers or clouds.

How does fog computing work?

A fog computing fabric can have a variety of components and functions. It could include fog computing gateways that accept the data IoT devices have collected, as well as a variety of wired and wireless granular collection endpoints, including ruggedized routers and switching equipment. Other aspects could include customer premises equipment (CPE) and gateways for accessing edge nodes. Higher up the stack, fog computing architectures would also touch core networks and routers, and eventually global cloud services and servers.

The OpenFog Consortium, the group developing reference architectures, has outlined three goals for a fog framework. Fog environments should be horizontally scalable, meaning they support multiple industry vertical use cases; able to work across the cloud-to-things continuum; and a system-level technology that extends from things, over network edges, through to the cloud and across various network protocols.

Are fog computing and edge computing the same thing?

Helder Antunes, senior director of corporate strategic innovation at Cisco and a member of the OpenFog Consortium, says that edge computing is a component, or a subset of fog computing. Think of fog computing as the way data is processed from where it is created to where it will be stored. Edge computing refers just to data being processed close to where it is created. Fog computing encapsulates not just that edge processing, but also the network connections needed to bring that data from the edge to its end point.


Based on actual users’ experience with IoT platforms, here are the leading features and functionalities potential users should be looking for.

Article published on NetworkWorld by a contributor, Jan 16, 2018

As an IoT platform and middleware analyst, I am asked constantly about the benefits of IoT platforms and “what makes a great IoT platform.” In response, I often ask these curious inquirers if they’ve ever used IoT platforms themselves. Walking on the edge is exhilarating, but having hands-on insights, data and expertise on how to survive the journey is even better.

What do users actually experience when they use IoT edge platforms?

IoT edge computing is a technology architecture that brings certain computational and analytics capabilities near the point of data generation. IoT edge platforms provide the management capabilities required to deliver data from IoT devices to applications while ensuring that devices are properly managed over their lifetimes. Enterprises use edge platforms for factory automation, warehousing/logistics, connected retail, connected mining and many other solutions. With IoT platform revenue slated to grow to US$63.4 billion by 2026, IoT edge is one of the most heavily relied-upon enterprise IoT platform approaches.

Enterprises spend a tremendous amount of time completing edge-related IoT platform activities. According to hands-on tests of IoT platforms in MachNation’s IoT Test Environment (MIT-E), the majority of an enterprise user’s edge-related time is spent creating visualizations to gain insight from IoT data: 35% of a user’s time is spent creating dashboards with filtered alerts, and a combined 16% is spent viewing sensor data for an individual device (8%) or a group of devices (8%). Data from an IoT platform are critically important, so the ability to assemble dashboard sensor data and alerts is key – expect to spend a lot of time doing it.

Since the edge is critical for enterprises deploying IoT solutions, we’ve identified the top five user requirements of IoT edge platforms, based on IoT platform users’ experiences with these platforms.

1. Pick a platform with extensive protocol support for data ingestion

To seamlessly bring data from devices into the edge platform, enterprises should choose leading IoT platforms that support an extensive mix of protocols for data ingestion. The list of protocols for industrial-minded edge platforms generally includes brownfield deployment staples such as OPC-UA, BACNET and MODBUS, as well as more current ones such as ZeroMQ, Zigbee, BLE and Thread. Equally important, the platform must be modular in its support for protocols, allowing customization of existing means of asset communication and development of new ones.
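To illustrate that modularity, here is a minimal Python sketch of a plug-in handler registry, using the real pyzmq bindings for ZeroMQ (one of the protocols named above); the registry itself and the `on_reading` callback are invented for the example, not any particular platform's API.

```python
import json
import zmq  # pyzmq: ZeroMQ bindings, one of the protocols listed above

PROTOCOL_HANDLERS = {}  # registry: protocol name -> ingestion function

def register(name):
    """Decorator so new protocol handlers plug in without touching the core."""
    def wrap(fn):
        PROTOCOL_HANDLERS[name] = fn
        return fn
    return wrap

@register("zeromq")
def ingest_zeromq(endpoint, on_reading):
    """Subscribe to a ZeroMQ publisher and normalize messages to dicts."""
    sock = zmq.Context.instance().socket(zmq.SUB)
    sock.connect(endpoint)                     # e.g. "tcp://10.0.0.5:5556"
    sock.setsockopt_string(zmq.SUBSCRIBE, "")  # all topics
    while True:
        on_reading(json.loads(sock.recv_string()))

# A MODBUS or OPC-UA handler would be registered the same way.
```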

2. Ensure the platform has robust capability for offline functionality

To ensure that the edge platform works when connectivity is down or limited, enterprises should choose leading IoT edge platforms that provide capabilities in four functional areas. First, edge systems need to offer data normalization to successfully clean noisy sensor data. Second, these systems must offer storage to support intermittent, unreliable or limited connectivity between the edge and the cloud. Third, an edge system needs a flexible event-processing engine at the edge, making it possible to generate insight from machine data when connectivity is constrained. Fourth, an IoT edge-enabled platform should integrate with systems including ERP, MES, inventory management and supply chain management to help ensure business continuity and access to real-time machine data.
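A minimal store-and-forward sketch of the second point (local storage to ride out connectivity gaps) might look like the following Python; the `upload` callable standing in for the cloud link is hypothetical.

```python
import sqlite3

db = sqlite3.connect("edge_buffer.db")  # durable local queue on the gateway
db.execute("CREATE TABLE IF NOT EXISTS buffer (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(payload: str):
    """Always write locally first, regardless of connectivity."""
    db.execute("INSERT INTO buffer (payload) VALUES (?)", (payload,))
    db.commit()

def flush(upload):
    """Drain the buffer when the cloud link is back; keep rows that fail."""
    for row_id, payload in db.execute("SELECT id, payload FROM buffer").fetchall():
        try:
            upload(payload)  # hypothetical uplink call
        except OSError:
            break            # still offline; retry on the next flush cycle
        db.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
        db.commit()
```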

3. Make sure the platform provides cloud-based orchestration to support device lifecycle management

To make sure that the edge platform offers highly secure device management, enterprises should select IoT platforms that offer cloud-based orchestration for provisioning, monitoring and updating of connected assets. Leading IoT platforms provide factory provisioning capabilities for IoT devices. These API-based interactions allow a device to be preloaded with certificates, keys, edge applications and an initial configuration before it is shipped to the customer. In addition, platforms should monitor the device using a stream of machine and operational data that can be selectively synced with cloud instances. Finally, an IoT platform should push updates over-the-air to edge applications, the platform itself, gateway OSs, device drivers and devices connected to a gateway.
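As a rough illustration of the API-based factory-provisioning interaction described above, the following Python sketch uses the real requests library against an invented endpoint; the URL, fields and token are all hypothetical, since each platform exposes its own provisioning API.

```python
import requests  # real HTTP library; the endpoint below is invented

def preprovision(device_id: str, cert_pem: str, config: dict) -> str:
    """Register a device before shipment: preload its certificate and config."""
    resp = requests.post(
        "https://iot.example.com/api/v1/devices",   # hypothetical endpoint
        headers={"Authorization": "Bearer <factory-token>"},
        json={"device_id": device_id, "certificate": cert_pem, "config": config},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["activation_code"]  # hypothetical response field
```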

4. The platform needs a hardware-agnostic scalable architecture

Since there are tens of thousands of device types in the world, enterprises should select IoT platforms that are capable of running on a wide range of gateways and specialized devices. And these platforms should employ the same software stack at the edge and in the cloud allowing a seamless allocation of resources. Platforms should support IoT hardware powered by chips that use ARM-, x86-, and MIPS-based architectures. Using containerization technologies and native cross-compilation, the platforms offer a hardware-agnostic approach that makes it possible to deploy the same set of functionalities across a varied set of IoT hardware without modifications.
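One hedged sketch of what hardware-agnostic deployment can look like in practice: the script below maps the CPU architecture reported by the OS to a per-architecture container image tag and launches it. The image name and tag scheme are invented; real platforms often use multi-arch images instead.

```python
import platform
import subprocess

# Map the CPU architecture reported by the OS to a container image tag.
# Image name and tags are invented for the example.
ARCH_TO_TAG = {
    "x86_64": "edge-agent:amd64",
    "aarch64": "edge-agent:arm64",
    "armv7l": "edge-agent:armv7",
    "mips": "edge-agent:mips",
}

def deploy_agent():
    tag = ARCH_TO_TAG.get(platform.machine())
    if tag is None:
        raise RuntimeError(f"unsupported architecture: {platform.machine()}")
    subprocess.run(["docker", "pull", tag], check=True)
    subprocess.run(["docker", "run", "-d", "--restart=always", tag], check=True)
```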

5. Comprehensive analytics and visualization tools make a big difference

As we’ve already discussed, enterprises should choose IoT platforms that offer out-of-the-box capabilities to aggregate data, run common statistical analyses and visualize data. These platforms should make it easy to integrate leading analytics toolsets and use them to supplement or replace built-in functionality. Different IoT platform users will require different analysis and visualization capabilities. For example, a plant manager and a machine worker will want interactive dashboards that deliver useful information and relevant controls for each of their respective roles. Flexibility in analytics and visualization capabilities will be essential for enterprises as they develop IoT solutions for their multiple business units and operations teams.
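As a minimal sketch of that out-of-the-box aggregation, the following Python function turns raw (device, value) pairs into per-device dashboard rows with a filtered-alert flag; the field names and threshold are invented for the example.

```python
from collections import defaultdict
from statistics import mean

def dashboard_rows(readings, alert_above):
    """Aggregate raw (device_id, value) pairs into per-device dashboard rows."""
    by_device = defaultdict(list)
    for device_id, value in readings:
        by_device[device_id].append(value)
    rows = []
    for device_id, values in sorted(by_device.items()):
        rows.append({
            "device": device_id,
            "mean": round(mean(values), 2),
            "max": max(values),
            "alert": max(values) > alert_above,  # the filtered-alert column
        })
    return rows

for row in dashboard_rows([("pump-1", 71.0), ("pump-1", 95.5), ("pump-2", 68.2)], 90):
    print(row)
```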

Enterprises worldwide are using IoT to increase security, improve productivity, provide higher levels of service and reduce maintenance costs. As they seek to adopt IoT solutions to improve their critical business processes, they should conduct hands-on usability tests to understand edge platform capabilities. Keep watching as more and more enterprises start walking on the edge.

Article written by Brian Krebs, published on KrebsOnSecurity on Jan. 18, 2018

Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs.

Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

-Rule #1: Avoid connecting your devices directly to the Internet — either without a firewall or in front of it, by poking holes in your firewall so you can access them remotely. Putting your devices in front of your firewall is generally a bad idea because many IoT products were simply not designed with security in mind, and making these things accessible over the public Internet could invite attackers into your network. If you have a router, chances are it also comes with a built-in firewall. Keep your IoT devices behind the firewall as best you can.

-Rule #2: If you can, change the thing’s default credentials to a complex password that only you will know and can remember. And if you do happen to forget the password, it’s not the end of the world: Most devices have a recessed reset switch that can be used to restore the thing to its factory-default settings (and credentials). Here’s some advice on picking better ones.
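If you would rather generate a strong password than invent one, a few lines of Python using the standard secrets module will do; the length and character set below are arbitrary choices.

```python
import secrets
import string

alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
# 20 characters drawn from a cryptographically secure RNG; length is arbitrary.
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)
```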

I say “if you can,” at the beginning of Rule #2 because very often IoT devices — particularly security cameras and DVRs — are so poorly designed from a security perspective that even changing the default password to the thing’s built-in Web interface does nothing to prevent the things from being reachable and vulnerable once connected to the Internet.

Also, many of these devices are found to have hidden, undocumented “backdoor” accounts that attackers can use to remotely control the devices. That’s why Rule #1 is so important.

-Rule #3: Update the firmware. Hardware vendors sometimes make security updates available for the software that powers their consumer devices (known as “firmware”). It’s a good idea to visit the vendor’s Web site and check for any firmware updates before putting your IoT things to use, and to check back periodically for new updates.

-Rule #4: Check the defaults, and make sure that features you may not want or need — like UPnP (Universal Plug and Play, which can easily poke holes in your firewall without your knowing it) — are disabled.

Want to know if something has poked a hole in your router’s firewall? Censys has a decent scanner that may give you clues about any cracks in your firewall. Browse to whatismyipaddress.com, then cut and paste the resulting address into the text box at Censys.io, select “IPv4 hosts” from the drop-down menu, and hit “search.”

If that sounds too complicated (or if your ISP’s addresses are on Censys’s blacklist), check out Steve Gibson‘s ShieldsUP page, which features a point-and-click tool that can tell you which network doorways or “ports” may be open or exposed on your network. A quick Internet search on the exposed port number(s) can often yield useful results indicating which of your devices may have poked a hole.
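For readers comfortable with a few lines of code, here is a small TCP probe that checks specific ports on a host you own (only scan your own equipment); note that probing a device from inside your LAN gives a different view than the outside-in view Censys or ShieldsUP provides.

```python
import socket

COMMON_PORTS = [21, 22, 23, 80, 443, 554, 8080]  # telnet/RTSP often betray IoT gear

def probe(host: str, ports):
    """Report which TCP ports accept a connection on the given host."""
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(1.0)
        state = "open" if sock.connect_ex((host, port)) == 0 else "closed/filtered"
        sock.close()
        print(f"{host}:{port} {state}")

probe("192.168.1.1", COMMON_PORTS)  # your router's LAN address, typically
```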

If you run antivirus software on your computer, consider upgrading to a “network security” or “Internet security” version of these products, which ship with more full-featured software firewalls that can make it easier to block traffic going into and out of specific ports.

Alternatively, GlassWire is a useful tool that offers a full-featured firewall as well as the ability to tell which of your applications and devices are using the most bandwidth on your network. GlassWire recently came in handy in helping me determine which application was using gigabytes’ worth of bandwidth each day (it turned out to be a version of Amazon Music’s software client with a glitchy updater).

-Rule #5: Avoid IoT devices that advertise built-in Peer-to-Peer (P2P) capabilities. P2P IoT devices are notoriously difficult to secure, and research has repeatedly shown that they can be reachable remotely over the Internet even through a firewall, because they’re configured to continuously find ways to connect to a global, shared network so that people can access them remotely. For examples of this, see previous stories here, including This is Why People Fear the Internet of Things and Researchers Find Fresh Fodder for IoT Attack Cannons.

-Rule #6: Consider the cost. Bear in mind that when it comes to IoT devices, cheaper usually is not better. There is no direct correlation between price and security, but history has shown that devices toward the lower end of the price range for their class tend to have the most vulnerabilities and backdoors, and the least vendor upkeep or support.

In the wake of last month’s guilty pleas by several individuals who created Mirai — one of the biggest IoT malware threats ever — the U.S. Justice Department released a series of tips on securing IoT devices.

One final note by the author (Krebs): I realize that the people who most need to read these tips will likely never know they need to care enough to act on them. But by taking proactive steps, you can at least reduce the likelihood that your IoT things will contribute to the global IoT security problem.

The blockchain is the technology behind Bitcoin (and other cryptocurrencies), which is currently dominating the headlines due to its meteoric rise over the past month and the equally massive plunge it has taken this week. Bitcoin is nothing if not volatile.

Blockchain tech, on the other hand, is a transparent, distributed digital ledger that is inherently secure. It promises to revolutionize many diverse sectors, including musical digital rights management, secure digital voting, storage of healthcare records, and digital ‘smart’ legal contracts, to name but a few applications. The blockchain is frequently referred to as a disruptive invention, even compared to the invention of the internet itself.

While blockchain technology offers many advantages, including a high level of security against fraud, and potentially cost-effective transactions, it may not become a storming success and sweep the world off its feet as soon as you might think. As with most fresh technological innovations, it faces an uphill battle towards adoption.

Here are some of the current obstacles that are ‘blocking the blockchain’, as it were.

1. Energy wastage

Bitcoin and cryptocurrency mining are highly dependent on GPUs and ASIC miners for profitability. Anyone who has built a computer is aware that GPUs require a robust power supply to function, with a greater amount of power on tap being ideal for stability.

Also note that the security of the Bitcoin blockchain is obviously critical, and it rests on making any attempt to defraud the system not worth the effort: that computing power would be more profitably spent simply mining the next Bitcoin.

Now, as of December 6, 2017, the energy consumption of Bitcoin mining had reached 32.36 terawatt-hours per year, a staggering amount of power that is actually higher than the annual energy usage of 159 individual countries, according to one estimate.
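To put that figure in perspective, a quick back-of-the-envelope calculation converts the annual figure into a continuous power draw:

```python
terawatt_hours_per_year = 32.36          # figure cited above
hours_per_year = 365 * 24                # 8760
average_draw_gw = terawatt_hours_per_year * 1e12 / hours_per_year / 1e9
print(f"{average_draw_gw:.2f} GW")       # roughly 3.7 GW, around the clock
```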

With all this in mind, maintaining data in a blockchain – and keeping it intact and free of fraud – is an inherently energy-inefficient process. In the current era of 6W processors for laptops, deep sleep states for electronics, and solar panels, all aimed at greater energy efficiency and independence, the high energy consumption of blockchain technology and virtual currency mining flies in the face of this.

2. Data woes

Generally speaking, the internet is fairly efficient when it comes to the transmission of data. The user requests information, and the server transmits back the piece of data requested with only a small amount of additional data required to get it there.

The blockchain, however, needs multiple copies distributed across many nodes, both to preserve it and to prevent tampering. It therefore requires a large amount of storage: Bitcoin’s blockchain was nearly 150GB in size as of last month, and it’s growing all the time.
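The storage cost multiplies with replication. Assuming, purely for illustration, on the order of 10,000 full nodes each holding a complete copy:

```python
chain_size_gb = 150        # Bitcoin's blockchain size cited above
full_nodes = 10_000        # assumed node count, for illustration only
total_pb = chain_size_gb * full_nodes / 1_000_000  # GB -> PB (decimal units)
print(f"{total_pb:.1f} PB stored network-wide")    # 1.5 PB for 150GB of ledger
```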

Furthermore, transmitting so much data for the blockchain each time also consumes additional electricity, making the blockchain quite inefficient. At a time when efforts are being made to compress video further to decrease the data required for a download, blockchain’s bulkiness makes little sense.

3. Time for adoption

While blockchain technology may ultimately work for some sectors, its wider adoption may be a sluggish process, particularly when it comes to industries which are notably set in their ways.

Some sectors – like legal and healthcare – have only just started to move away from paper records, and in some cases still maintain them as backups. They are unlikely to jump to a cutting-edge solution such as the blockchain overnight.

The technology will need to clearly demonstrate its advantages and build a proven track record before this happens, and that could take decades. After all, remember that stock markets were still holding onto their old ticker tapes in the 1970s, after first using them in 1867, and the world’s last telegram was sent in 2013.

4. Centralization may be a good thing

Bitcoin was developed as a decentralized cryptocurrency that allows peer-to-peer transactions. However, this can be a disadvantage: governments cannot easily track funds and risk losing out on tax revenue (which may, potentially, mean that the average taxpayer ends up paying more). It also makes things more challenging when users suffer fraud, since recovering funds can be difficult.

5. Slow transactions with cryptocurrency

Some tout Bitcoin as the future of currency, and the promise is that peer-to-peer transactions can happen in a fast and cost-efficient manner that can compete with traditional credit cards.

However, Bitcoin transactions are painfully slow, in some cases taking multiple hours to complete, which is a glacial pace, at least in the world of finance. One current reason for this bottleneck is that each transaction typically waits for six confirmations, that is, six mined blocks, before it is considered final.
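The arithmetic behind that delay is straightforward: Bitcoin targets a new block roughly every ten minutes, so waiting for six confirmations implies about an hour at minimum, before any congestion-related queuing:

```python
block_interval_min = 10    # Bitcoin's target average time between blocks
confirmations = 6          # the confirmation count mentioned above
print(f"~{block_interval_min * confirmations} minutes minimum wait")  # ~60 min
```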

Obviously enough, this process needs to be sped up significantly for Bitcoin to realistically become a true rival to established methods of buying goods.

6. Private problems

Many of the advantages of the blockchain come from its public use – anyone can download the entire blockchain, and mine for additional currency, which democratizes this process.

It also keeps the blockchain resistant to attackers: with such a large legitimate group dedicated to mining, any fraud attempt would effectively have to ‘out-mine’ the honest miners, a process that would take a colossal amount of computing power for a popular cryptocurrency. This type of blockchain is known as a public blockchain.
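A toy proof-of-work loop shows why ‘out-mining’ the honest majority is so costly: every extra hex digit of difficulty multiplies the expected hashing work by 16, and an attacker must redo that work faster than everyone else combined. This is an illustration only, not Bitcoin’s actual algorithm or parameters.

```python
import hashlib
from itertools import count

def mine(block_data: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

# Each +1 in difficulty multiplies the expected work by 16 (one hex digit).
print(mine(b"example block", 4))  # quick on a laptop; real networks need far more
```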

So what about a private blockchain? The same blockchain tech can be applied as a storage medium; if a company doesn’t want anyone to download the entire blockchain, and no one is going to mine it, then it is kept as a private blockchain, held on a handful of private nodes rather than distributed across thousands of public nodes as a public blockchain is.

A private blockchain, while more tightly controlled and far less likely to be hijacked or hacked, also flies in the face of the fundamental idea of this technology, losing the advantages of transparency and wide distribution that make blockchain tech intriguing in the first place.


Article published on TechRadar