Giuseppe Paternò | https://gpaterno.com | Strategic IT Advisor・Europe & Middle-East

USG OpenVPN site-to-site parameters
https://gpaterno.com/usg-openvpn-site-to-site/ | Fri, 21 Feb 2020

You know that I’m using my blog as my personal “friendly reminder”, and this article is no different 🙂

I wanted to connect my brand new Ubiquiti Security Gateway (USG) to a pfSense box and a Linux router using OpenVPN site-to-site. This is no big deal: there are a bunch of articles out there on how to do it, but many of them involve customising the USG configuration file on the controller, i.e. creating a config.gateway.json as described in Ubiquiti's documentation.

However, I wanted a configuration as clean as possible: I have many VPNs to which I connect from home, and creating a config file with all these connections would be such a mess. Anyway, I found out that the USG OpenVPN encryption/auth/compression parameters are somehow not the "standard" ones that pfSense and stock OpenVPN would accept by default. Through debugging, I found that these are the OpenVPN parameters the USG uses when setting up a site-to-site VPN over OpenVPN:

Protocol: UDP
Encryption: BF-CBC
Auth digest: SHA1
Compression: omit preference (leave openvpn default)
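
For reference, on the pfSense or plain-OpenVPN side these settings map to a peer configuration along these lines (a minimal sketch for a static-key, point-to-point tunnel; the endpoint, tunnel IPs and key path are placeholders, not values taken from the USG):

```
dev tun
proto udp                       # matches the USG: UDP
remote usg.example.org 1194     # placeholder endpoint and port
ifconfig 10.255.0.2 10.255.0.1  # placeholder tunnel addresses
cipher BF-CBC                   # matches the USG: BF-CBC
auth SHA1                       # matches the USG: SHA1
secret /etc/openvpn/site.key    # the shared static key
# no compression directive: leave OpenVPN at its default, as the USG does
```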

Mind that the secret key is the shared secret, without the BEGIN and END lines, all on a single line and without spaces.
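
To flatten a key generated with openvpn --genkey into that single-line form, a pipeline like the following works (a sketch; static.key stands for whatever file holds your shared secret):

```shell
# Drop comment lines and the BEGIN/END markers, then join the key material on one line
grep -v '^#' static.key | sed '/^-----/d' | tr -d '\n'
```

Paste the resulting single line into the USG's pre-shared key field.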

The above encryption algorithms are considered very weak and could be easily cracked. If you use the VPN for confidential data, I would highly recommend customising config.gateway.json. My case is easier, as I am only using the VPN to reach private IPs in another network and have additional layers of encryption on top.

Hope it helps!

The post USG OpenVPN site-to-site parameters appeared first on Giuseppe Paternò.

Problems happen to everyone: the SeeWeb story
https://gpaterno.com/i-problemi-succedono-a-tutti-storia-di-seeweb/ | Fri, 24 Jan 2020

I'm writing this post quickly, even though I'm tied up with my usual travels, because I didn't want this matter to slip by unnoticed.

In case you haven't heard, SeeWeb, one of the largest Italian housing/hosting providers, suffered a rather serious outage on 18 January in one of its datacenters. SeeWeb's marketing manager explained what happened in a post:

https://blog.seeweb.it/a-quanti-anni-corrispondono-34-anni-ibm/

Just to avoid any misunderstanding, let me say upfront that I personally know both the owner of SeeWeb and some of its senior engineers, but I am in no way involved in the company, nor do I have any relationship with it other than friendship and being a customer.

I have heard a great deal of criticism aimed at SeeWeb and, worse, personal attacks on some of its engineers, both on social networks and in various discussions.

As you know, I have contributed to many large projects, up to managing hundreds of thousands of systems, and I can assure you that shit happens. Period.

To everyone who criticised, I would gently point out that it is easy to point the finger at someone else without ever having taken a risk first-hand or run a service yourself, especially for people who, at best, have managed a few dozen servers in their lives.

The only thing I can hold against SeeWeb is the naivety of having trusted a vendor. It's a classic pattern I keep seeing: "someone will support me anyway". And indeed IBM did support them and they came out of it, but a SAN is still a single point of failure for storage. However redundant it is, if both controllers go down you end up in this kind of mess. And I assure you they are not the only ones, and not only with IBM.

There are two things I would like people to reflect on: vendors and customers.

On one side, vendors are increasingly tied to the market and less to the product. Once upon a time, buying IBM (or HP, Dell, Hitachi, …) meant buying quality. For anyone working at a vendor this is no surprise: unfortunately they have had to bend to market dynamics, where it is neither the product nor the customer that rules, but Wall Street. Returns are expected every quarter or fiscal year, so there is a race to push new products and services onto the market, at the expense of understanding whether those products are really ready to be released. It doesn't help that so much is now done in software: inside vendors I have often heard "we'll fix it later with an update", or with an after-market hardware fix. And then damage happens, like what happened to SeeWeb (disclaimer: I don't have the technical details, so I can't discuss the specific case). Problems therefore often end up being solved by the customers themselves and by the vendor's front-line support staff, who are usually willing, together with L3 support (the people who write the code), all of whom increasingly suffer from this pressure.

On the other side, I would like customers to reflect. There is the type of customer who is not in the trade: I can understand they bought without really having the tools to decide. If a failure doesn't matter much to their business, fine; if instead they believe the service is critical, my advice is to rely on a real consultant (not the smooth-talking kind). A different case is the customer, or worse the customer's consultant, who has delegated too much to the infrastructure and to third parties, forgetting that problems can happen to anyone, even Amazon or Microsoft, or who cut corners "to save money" because there is no budget. The last case is the customer whose infrastructure is not business-critical, who hosts it wherever it costs least and, if it goes down, simply doesn't care.

Whatever the case, it is always wise to keep a "Plan B" in your pocket; whether that is a Business Recovery or a Business Continuity plan depends on the kind of business the company runs and the services it delivers. Even if a potential outage is not critical for your business, it is always good to keep a vendor-neutral off-site copy of your data, in case the service provider has a serious incident (I remember a truck once crashing into a datacenter, causing enormous damage). Faced with any damage, temporary or prolonged, the customer then always has the option to wait or to restore its systems elsewhere. If instead you are one of those customers who need their systems always up, then I recommend a multi-cloud strategy (even active-active), with applications able to replicate data instantly.

Having a Business Recovery or Continuity plan is not difficult, nor does it carry enormous costs, especially if you consider what an outage would cost you. Risk management is a mindset that unfortunately few people have, as, alas, the SeeWeb case has also shown.

The post Problems happen to everyone: the SeeWeb story appeared first on Giuseppe Paternò.

A summary of 2019 and why I’m not running for OpenStack board again
https://gpaterno.com/a-summary-of-2019-and-why-im-not-running-for-openstack-board-again/ | Tue, 31 Dec 2019

It’s the beginning of the year… that time of the season when you can rest a bit and, like most of you, take stock of what 2019 meant to me.
I can definitively tell you that 2019 was a year full of changes and I bet that 2020 will be no different 🙂
Last thing first. This year I decided not to run as an individual representative for the OpenStack board of directors. In a previous blog post I spoke about the status of OpenStack as a project; I would describe it as a "dead man walking".
OpenStack looked promising, but Kubernetes was faster and managed to take over. Also, many enterprises are shifting to the cloud with a "cloud-first" approach, both for IaaS and for services (e.g. Salesforce). That's why I've seen Kubernetes much more often in the cloud than on-premises.
This is the reason I joined SUSE and left shortly after. You know how much I care about OpenSource and Linux, so I thought it would be cool to "complete" my career across all the commercial Linux distributions available. In a few months at SUSE, I learned an invaluable lesson: OpenSource innovation is no longer in the hands of the big "open" vendors like RedHat, SUSE or Canonical; it sits with the big IaaS/SaaS vendors instead. Look at what Facebook, Google, Amazon and even Microsoft have contributed to OpenSource in recent years.
While I enjoyed the time spent with my colleagues at SUSE, it's crystal clear that the market is moving away from traditional software vendors and embracing "as a service" more and more.
There’s another lesson I learned, this time about myself. I "played" entrepreneur in the past years and it didn't go exactly as expected. It didn't go wrong, but I can't call it a success either. It definitely was a success as a personal objective, though, as I wasn't sure I could make it. I learned a lot and understood that I can probably do a better job managing companies than some "alleged" entrepreneurs who run startups. At the same time, I also understood my limits. I figured out that I'm able to run a company as a whole (sales, marketing, products, legal, …) and keep it solid. However, I truly believe that being an entrepreneur goes beyond being a good manager with a good product: much of it is about having great connections and being a good influencer.
Will I try being an entrepreneur again, or challenge myself with a C-level position in the future? Who knows. Meanwhile, I decided it's time to move on and refocus on what I do best, i.e. acting as a trusted advisor for big companies around Europe. I believe I've got the right balance between deep technical knowledge on many subjects and the communication skills needed to interact with upper management.
There's a specific need in the market now. In a cloud-first approach, the selection and integration work between multiple online services and on-premises systems will play a key role. I believe better management of the IT budget, especially in the cloud, will be a hot topic in the future, and automation will definitely have a role in all of that. We will face a "consumerization" of IT, especially on the user side, and multi-cloud on the services side. Security will also play a key role, where the "zero-trust" approach and cloud identity management will slowly replace the traditional firewall and VPN.
London is in my heart and probably the closest thing to what I can consider "home", but being a digital nomad will definitely still be a thing for me in 2020. So, I'll see you around Europe 🙂
A happy and prosperous 2020 to all of you.

The post A summary of 2019 and why I’m not running for OpenStack board again appeared first on Giuseppe Paternò.

Docker Containers & Security
https://gpaterno.com/docker-containers-security/ | Thu, 04 Apr 2019

Background

For those of you not acquainted with the latest trends in software development, a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application. This means an image bundles code, runtime, system tools, system libraries, and settings.

In fact, a Docker image is self-contained and easy to distribute. It's also easy to move from development to test and on to production. That portability was key to the successful adoption of Docker containers among developers.

With Docker, distributing applications is very easy and everyone can create and publish their own containers. There is even an official registry for Docker images (https://hub.docker.com/), to which the community at large contributes. Indeed, there are a vast number of official and unofficial ready-to-use images, even for commonplace applications like databases and web servers.

Docker Security

A lot of security concerns have been raised about containers. Many of them are valid, but many misunderstandings occur because people confuse containers with virtualization, thinking they're the same thing. Virtualization segregates memory and CPU per guest, whereas containers share the host's kernel and resources.

Last month a security researcher found a way to "escape" the jail that Docker creates and replace arbitrary programs on the host system (CVE-2019-5736). A malicious program or image can exploit a bug in the container runtime (runC) to gain root privileges on the host running the container. This gives ill-intentioned players unlimited access to the server, as well as to any other containers on it.

The risks are quite clear.

What can I do? Be Very Careful.

Developers and Operations have to select Docker images with extreme care. The way to exploit the aforementioned bug is through malicious, hidden programs that, before launching the real program, execute the exploitation sequence and inject an evil binary.

Most of the time, images are built in-house for custom-made applications, but sometimes third-party images are used for common tasks such as databases, caching (e.g. Redis) and front-ends (e.g. HAProxy). DevOps teams should avoid images from unknown or untrusted sources.

If you can, create your own images, even for common applications, starting from well-known and trusted sources, such as the operating system itself or from the application vendor.
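
As an illustration of that advice, building a common service yourself from a well-known base image can take only a few lines; the base tag and package below are hypothetical examples, not recommendations from this post:

```dockerfile
# Build your own Redis image from a trusted base instead of pulling an unknown one
FROM debian:stable-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends redis-server \
 && rm -rf /var/lib/apt/lists/*
# Run as the unprivileged user that the Debian package creates
USER redis
CMD ["redis-server", "--protected-mode", "yes"]
```

Building from the distribution's own packages keeps the provenance of every binary in the image traceable to a source you already trust.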

At present, I am helping many of my customers introduce secure build pipelines into their architecture. By embracing DevSecOps, and thus driving development and automation in a different way, you can improve the overall security of in-house applications. At the same time, you can ensure the code provided by suppliers is up to scratch. In this way, Docker can indirectly help maintain code quality.

Use Docker by all means, but implement it carefully to avoid potential security pitfalls resulting from poor coding practice.

If you require specialist advice on building secure pipelines or security with docker, get in touch with me.

The post Docker Containers & Security appeared first on Giuseppe Paternò.

Gartner Conference Nov 2018
https://gpaterno.com/gartner-conference-nov-2018/ | Tue, 06 Nov 2018

I am honoured to host a keynote speech at the Gartner Cloud Conference in London on 26-27 November at the O2 Theatre with a colleague. I will also be interviewed by analysts on Edge & IoT as a well-recognised cloud specialist.

If you are at the conference, don’t miss the chance to have a private chat with me, or feel free to come and introduce yourself: I will be at the SUSE booth.

The post Gartner Conference Nov 2018 appeared first on Giuseppe Paternò.

Is OpenStack still a thing?
https://gpaterno.com/is-openstack-still-a-thing/ | Mon, 17 Sep 2018

You know how much I care about OpenStack and how deeply involved I feel in its community. A recent experience at a premier customer in the Netherlands made me think about OpenStack as a whole.

As I said at a conference last year, I am probably the most involved OpenStack consultant in Europe, and yet I've seen more failures than successes. Most failures were due to a lack of expertise in OpenStack and of deep knowledge of Linux, protocols and OpenSource in general. I initially thought that the skills I have are quite common and that there are plenty of people capable of running such an infrastructure, but apparently I was wrong.

Even big brands usually have one to three great engineers, and the rest are average. This is not really a bad thing, but you need great skills to manage OpenStack, and most of the time management can't bet the business on a handful of guys. Many decided either to go to public clouds, and some went back to VMware because those skills are easier to find. To be honest, as an entrepreneur, I can't blame them.

Public clouds (AWS, Azure, Google, …) are easy to adopt: you don't have to maintain hardware, storage or network, which is very attractive to customers for whom IT is not the core business. Public clouds might seem costly at the beginning, but if you look at the real TCO (including labour costs), you find out they are not that expensive.
And if you are concerned about your privacy, a good VMware cluster is enough for most businesses.

Kubernetes quickly appeared on developers' radar in the last year. It's "cool", and containers are a great way for developers to distribute their applications. At the end of the day, companies need to run their applications to make money or support their business; how they do it, they don't really care.

In my humble opinion, Kubernetes is not mature yet, especially in networking and storage, and it still lacks multitenancy. But it is slowly getting there. Kubernetes is not that simple to manage, but it's way less complex than OpenStack… and you don't depend on MySQL or RabbitMQ to operate it (which is a real pain). So what's the need for OpenStack, then?

This is the question I’m asking myself. Probably the number of use cases for OpenStack is quite small now, mostly related to telco operators and NFV.

The only thing Kubernetes cannot run is Microsoft Windows applications, but Microsoft has shown interest in porting its apps to Linux (see SQL Server, for example), not to mention that they are actively contributing to Helm.

While I still love OpenStack, we need to face the evidence that interest in OpenStack is slowly fading away. However, its legacy has been invaluable to me and to the community as well. The "Software-Defined" revolution that OpenStack brought, as well as the mindset around automation, is the basis for the next steps of IT.

The post Is OpenStack still a thing? appeared first on Giuseppe Paternò.

An era has ended: SecurePass shutdown
https://gpaterno.com/an-era-has-ended-securepass-shutdown/ | Tue, 31 Jul 2018

GARL announced that SecurePass would cease its official activities in August 2017. As of today, I have shut down all of SecurePass's virtual machines.

I am a bit sad, but there are choices you have to make, and sentiment is sometimes far removed from business.

This definitively marks the end of an era, but a new one is showing up.

The post An era has ended: SecurePass shutdown appeared first on Giuseppe Paternò.

Project “simplification” for 2018
https://gpaterno.com/project-simplification/ | Fri, 29 Jun 2018

Since the beginning of 2018, I have been working on an "internal" project whose ultimate goal is to simplify my life. 2017 was definitely a stunning year, with a lot of great projects and great results as well. I believe it will be difficult to ever achieve the same again. With great results, however, come great sacrifices: it was all about work, and there was little space for my own life. "All work and no play makes Jack a dull boy", as the proverb says, so I believe I deserve a little relief from the big pressure.

My new year's resolution was to simplify my life and achieve a better work/life balance. This "simple" resolution turned out to be more complex and harder than I thought. Since January, I have worked really hard to reduce the number of hassles as much as I can. This is the main reason you haven't seen me around and I haven't been very active on social media, at events, etc.

At the end of June, I can say I'm on the right track, but a lot has yet to come. Stand by for some great announcements 🙂

The post Project “simplification” for 2018 appeared first on Giuseppe Paternò.

Alicloud & RedHat Linux 7.4 BYOS
https://gpaterno.com/alicloud-redhat-linux-7-4-byos/ | Wed, 04 Apr 2018

Alibaba Cloud (Alicloud or Aliyun) is a promising Chinese cloud provider that is becoming popular in the Asia-Pacific region. If you want to offer services in China and comply with Chinese privacy law, all your data needs to stay in China. For this reason, Alicloud can be handy for starting your journey in the country.

Most businesses want to run the same certified workloads in China as well, and those are mostly based on RedHat Enterprise Linux (RHEL). Alicloud is a RedHat Certified Cloud Provider and offers RHEL images in its marketplace, but these images include a RedHat subscription. What if you have an Enterprise agreement and want to use the Bring Your Own Subscription (BYOS) model?

Here are some handy tricks to bring RHEL 7.4 BYOS into Alicloud and start serving your customers in China.

Alicloud supports importing images in RAW and VHD format, which helps us a lot. If you have an active RedHat subscription, you should download the RHEL 7.4 KVM guest image (see the image below). This image is compatible with the Alicloud virtualization system; Alicloud is also compatible with cloud-init, to customize the virtual machine at boot time. The direct link to the download page is here: https://access.redhat.com/downloads/content/69/ver=/rhel—7/7.4/x86_64/product-software

[Screenshot: RHEL 7.4 KVM guest image on the RedHat download page]
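
Since the KVM guest image honours cloud-init, first-boot customisation can be supplied as user data when launching the instance. A minimal, hypothetical #cloud-config sketch (the account name and key are placeholders, not values from this walkthrough):

```yaml
#cloud-config
hostname: rhel74-byos
users:
  - name: opsadmin                  # placeholder admin account
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...         # placeholder public key
package_update: true
```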

The next step is converting the QCOW2 image into RAW format. However, the conversion will expand the roughly 500MB QCOW2 image into a 10GB RAW file. Uploading such a big file is problematic if you are not in China and have to traverse the Great Firewall.

As such, we will upload the QCOW2 image into Alicloud Object Storage Service (OSS) and convert it using a temporary virtual machine in China. Create a bucket through the console and upload the image. Should you need a GUI to perform the upload, an official client named "OSS Browser" is available here: https://github.com/aliyun/oss-browser/blob/master/all-releases.md

I strongly recommend also downloading ossutil64, a CLI tool for OSS, so you can upload your image from the temporary Linux instance. The tool is available here: https://www.alibabacloud.com/help/doc-detail/50452.htm

Create a small Linux instance with the distro of your choice (I recommend CentOS) in your Chinese region (Beijing, in my case), making sure it has sufficient disk space. Once the instance is reachable, log in and download the QCOW2 image from the bucket using curl and the object URL. Convert it with the qemu-img tool:

qemu-img convert -f qcow2 -O raw rhel-server-7.4-x86_64-kvm.qcow2 rhel-server-7.4-x86_64-kvm.img

Once converted, use ossutil64 to upload the image to the bucket you created earlier.

[Screenshot: OSS bucket listing with the uploaded image]

If you click on the file, you can see its public URL in the preview. Copy the file URL, as we will feed it into the image importer.

[Screenshot: OSS object detail showing the public URL]

Go back to Elastic Compute Service (ECS), select Image in the left-hand menu and start the import through the "Import Image" function. In the OSS Object Address field, insert the URL you copied before. Use Linux as the operating system and RedHat as the system platform. Mind to specify RAW as the image format.

[Screenshot: Import Image dialog, step 1]

[Screenshot: Import Image dialog, step 2]

The Alicloud image service will (slowly) import the image. If everything is successful, you should see a listing similar to the one below:

[Screenshot: imported image listed in the ECS console]

You can now start a virtual machine from your newly created image and register your RedHat subscription with subscription-manager 🙂

The post Alicloud & RedHat Linux 7.4 BYOS appeared first on Giuseppe Paternò.

Outside of “The Net”
https://gpaterno.com/outside-of-the-net/ | Thu, 22 Mar 2018

I'd like to share something that happened to a friend of mine a couple of days ago. He runs a small cloud provider and acts as an outsourcer for selected customers. A very big firm in his country decided to move its brand-new website to one of his datacenters.

He runs two datacenters for disaster recovery and business continuity. Each datacenter has its own provider-independent IPs, a different ASN and different upstream providers.

What happened is that, once he moved the new website, Google delisted it from its search engine. Absolutely no trace of the company when searching, except for its famous products on the Amazon marketplace. Needless to say, the customer's marketing people and the developers were blaming my friend.

The initial investigation showed that Google had failed to retrieve the robots.txt file it needs to index the website, so it decided to delist it. Funnily enough, other search engines (e.g. Bing and Qwant) were able to retrieve the same file. In the access logs and tcpdump captures, there was no sign of the Google crawler.
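
For context, the robots.txt in question is just a small text file at the site root; a permissive example looks like this (a generic sketch, not the actual file from the story):

```
# served at https://example.com/robots.txt
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```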

During a test, he was able to "restore" the situation by moving the complex website, with its e-commerce platform, to the other datacenter. A deeper investigation revealed that, for some unknown reason, Google seemed to have blocked the ASN's IPs, while other search engines and the rest of the world could access the website. When he contacted the Google NOC, they said the Google search engine and webmaster tools are unsupported, so basically my friend was on his own. For equally unknown reasons, after a couple of weeks the datacenter's ASN IPs were reachable again.

This reminds me of my previous posts, in which I described how the Internet was designed to be as independent as possible from any central point, while information is now more and more centralized in a few companies. Of course, there was no malicious intent from Google in blocking my friend's IPs, but it turns out that one of these companies has the potential power to decide whether you can run your business or not.

The same thing could potentially happen with a public cloud provider: what if Amazon decides to shut down your machines (and it has the right to do so)?

I'm not against any cloud provider, and we should thank AWS and Azure for bringing such inspiring innovation to the world of IT. But, as I stated in previous posts, we need to be ready to bring our business back on-premises if forced to do so.

Just a couple of hints:

  1. Create your own local micro-cloud on-premises, say with OpenStack and Kubernetes, so that you can start small and scale up quickly
  2. Use open data and open standards, and avoid any layered product offered by the cloud provider: it will lock you in.
  3. Automate deployments as much as you can, so that they are reproducible and can be run on-premises

The idea I'm currently advocating is to apply the Raiffeisen model to IT, to foster a complementary alternative to public clouds and big outsourcers, so that heterogeneous enterprises in a local territory can team up to create a small micro-cloud and save money.

The post Outside of “The Net” appeared first on Giuseppe Paternò.
