OpenSees – A simple Docker image

It’s great to see that even a small piece of work done for academia can make a positive impact, together with Carmine Galasso of EPICentre UCL.
Last year, I created a Docker image for OpenSees, a software framework used for earthquake modeling. This allowed deployment in a Kubernetes cluster, a simple way to scale out the computational effort automatically.
In that case, it reduced the time for the analysis from months to a couple of days.

Once the work was completed, I published the image freely online for the community to use.

Since then, it has been downloaded more than 20K times and it has contributed to the work of researchers all over the world!



HAproxy and DNS in the cloud

HAproxy is a great tool that we all know and love. (Well, in case you don’t… go here!)
It is, however, surprising how even basic features are not enabled by default.
In particular, today I stumbled upon the configuration needed for dynamic DNS resolution in HAproxy.
In most cloud environments, nodes are coming and going all the time, and this happens while we rely on DNS for things like node and service discovery. If we deploy HAproxy as a forwarder towards an address defined as an FQDN (instead of an IP address), the default behavior is somewhat unsatisfying. The software will cache the initial DNS resolution and will never attempt to resolve the name again. I understand the reasoning behind this, but it is very inconvenient.
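The fix involves the kind of configuration sketched below (a minimal illustration; the resolver address, backend and server names are placeholders, not a configuration from a real deployment): a `resolvers` section pointing at the environment’s DNS, referenced from the server line so HAproxy re-resolves the FQDN at runtime instead of caching the first answer forever.

```
resolvers clouddns
    # DNS resolver of the environment (illustrative address)
    nameserver dns1 10.0.0.2:53
    # keep a valid answer for 10s before asking again
    hold valid 10s

backend app
    # "resolvers clouddns" makes HAproxy re-resolve this FQDN at runtime
    server app1 myservice.internal:8080 check resolvers clouddns resolve-prefer ipv4
```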

Continue reading “HAproxy and DNS in the cloud”

A “DevOps team” work organization (I)

We often hear about what DevOps is, or which tools can help you “achieve DevOps” in your organization (whatever that means); we know of Terraform and CloudFormation, but we rarely see a definition of the principles behind the work organization of our teams.
At Curve, I was hired precisely to create and structure the SRE/DevOps team of the company. In this article, instead of the usual technical deep dive, I’d like to share some of the principles that inspire the DevOps culture and how they were adapted to define the work in a startup.

Continue reading “A “DevOps team” work organization (I)”

A “DevOps team” work organization (II)

This is the second part of an article about the work organization of my DevOps team. You can find the first part here.

  1. Small batches of work
    Without entering the rabbit hole of the Toyota Production System and the theory of the value stream, I remember how most of the IT professionals I’ve worked with have had an “in-house big side project”: a large refactoring of that part of the infrastructure, or an update of all the operating systems in that part of the company network. It’s something they work on every day for a small percentage of their time. It’s hidden work as well: as usual, their managers don’t know about it. It cannot be measured, no one knows if or when it will be finished, and it ends up providing little value to both the company and the employee, who gets no reward for the job done.  Continue reading “A “DevOps team” work organization (II)”

Postgres’s invisible data or the curious case of the intangible length

A few days ago at Curve, our developers had some problems dealing with data coming from our database, and they asked for help. Apparently, a query that was working in dev™ did not work as expected in production.

Performing a sum on a certain set of rows succeeded, whereas a simple select was mysteriously failing. In fact, a value that was supposed to be the integer “3”, with a length of “1”, was actually being returned as a solid-looking “3” with a length of “7”. Strange, isn’t it?
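The full explanation is behind the link, but as a teaser, here is one flavor of problem that produces exactly this symptom (an illustrative sketch of the general phenomenon, not necessarily the actual culprit we found): invisible Unicode characters hiding inside what prints as a plain digit.

```python
# Zero-width spaces are invisible when printed, so this value looks like
# a plain "3" on screen, yet its length is 7 -- "invisible data".
visible = "3"
sneaky = "\u200b\u200b\u200b" + "3" + "\u200b\u200b\u200b"

print(visible, len(visible))  # prints: 3 1
print(sneaky, len(sneaky))    # renders as a bare 3, but the length is 7
```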

Continue reading “Postgres’s invisible data or the curious case of the intangible length”

Site Reliability Engineer – In Search of a Unicorn

At Curve, we’re rolling on the “Great Fintech Adventure”™ of revolutionizing the way in which you spend and manage your money. At its very core, the company is a blend of finance and engineering: two disciplines that come together to deliver to your doorstep the Curve card that you know and love.

Engineering is working hard these days to support and enable the organic growth of the team and, as part of our process, we’re constantly hiring new players that can help us go the extra mile and build amazing stuff!

We have openings for a lot of positions at the moment (btw, why don’t you join us?), but among them the hardest to find so far has proven to be the mythological figure of the “Site Reliability Engineer”, also known in the wild as “DevOps” or “Cloud Engineer”.

So…What are they?

They are a peculiar breed of software developers: usually highly motivated individuals, not scared by the complexity of code or by the configuration nuances of an operating system; they are, instead, attracted by the blurring line between development and operations, with strong fundamentals in both worlds. Ideally, they should be equally comfortable debugging the interactions of a Docker container and writing a piece of code that automates a manual task.


Has this not always been the case? What’s so special about this?

This role can exist only in a world revolutionized, a few years ago, by cloud computing. A world where new technologies are launched daily, legacy is almost non-existent (and regulations and compliance are still not well defined). Cloud computing enables a company, or even an individual, to quickly rent a shared pool of virtual computing resources and scale them on demand depending on the workload. It means that a company starting today does not need to buy any expensive physical infrastructure upfront: it can “rent”, for a fraction of the price, resources in a public data center and scale them elastically according to its needs. This enables business models that were unthinkable only five years ago, and every startup on the planet is trying to seize this opportunity. But deploying on the cloud and scaling virtual resources is a complicated problem to master; it requires a person with a unique blend of skills.


So…DevOps ? SRE ? Cloud engineer? Greengrocers ? 🙂

In theory, DevOps is a larger movement that encompasses both a culture and a role, strengthening communication between the development and operations teams and trying to automate the delivery process to make it as fast as possible. Site Reliability Engineering, instead, is a specialization of DevOps that was defined in a famous book written by Google [2] and can be synthesized as “what happens when you ask a software engineer to design an operations function.” It focuses on designing and coding production systems that respect their SLAs, while sharing the same ideas and techniques as the DevOps movement. Truth be told, in a startup such as Curve the difference between these roles is so blurry as to be almost non-existent; nonetheless, we believe it is important to start defining the right culture and practices from the beginning. SRE felt like the obvious choice in an industry where the reliability of the product is of core importance and there are heavy compliance regulations.


Nice, but what exactly is the SRE team doing at Curve, and why is it actually SRE?


At Curve, we are a very small team, but we are involved in the design and scalability of every feature developed. We are not working hidden in the background: we are doing distributed systems engineering every day. We are doing operations, making sure our containers run on updated machines and operating systems. We are doing development, writing functions that work with our firewalls, or that provide insights, monitor the usage of the card and the reliability of the system and, if needed, notify our super valuable Business Operations team. We are doing Site Reliability Engineering by defining the SLOs of our systems together with the developers who code them, and by managing them together. But we’re also careful about security and compliance, ensuring that all regulatory and compliance requirements are indeed taken into account. We are alive, growing and kicking!


Ok, great, so why is it hard to find someone?

This, in fact, is a surprisingly complicated question but, in my view, it happens for many reasons: some of them intrinsic to software engineering, others due to the market and education:


  1. Blurring the lines – Historically, the worlds of development and operations have always been separated “by a fence”: people with different skillsets and mindsets managing different portions of the same product, focused on conflicting goals: creating features in the shortest possible time vs. keeping the system as stable as possible. Asking for a change in this way of working is far from easy to accept, even for experienced individuals.
  2. Breadth over depth – In a world where “breadth over depth” is key, it becomes very hard to find professionals whose breadth does not come at the expense of depth, as opposed to unfocused generalists simply jumping from one thing to the next.
  3. Mindset – A large number of people who are very experienced in operations, and are now adapting to cloud environments, adopt these new technologies “as tools” without realizing the implications they bring. They are changing their skillset instead of changing their mindset: learning how to use Terraform without understanding the potential of Infrastructure as Code and how it may help developers be faster at creating their features, instead of only using it to manage machines. They still think their role is to “operationalize the product” instead of being involved in designing it.
  4. Taking the plunge – Experienced “IT pros”, traditionally used to “isolated” systems management, are professionally scared of learning how to code and of working with developers, Agile and the wider company. (If I had a penny for every person who said: “I do operations and use Python, but I’m not a developer”…)
  5. Universities – The shift towards cloud computing has been massive but occurred within a few years, and universities are struggling to prepare experts in the field. Only a handful of universities in the UK offer a dedicated cloud computing module (among them City, which is doing a good job at it [4]). As a result, most experts in the field are self-taught. Junior DevOps engineers have a hard time deciding which path to follow to become recognized experts. There are only a handful of certifications, coming from different vendors, and none of them actually tries to teach or verify anything cross-platform.
  6. Money – Money – Money – A shortage of professionals in this field has increased the competition, and thus the expense necessary for a company to acquire skilled workers, in a market where startups are no match for the bigger players.

So, what are you looking for, in the end?

The SRE team is already growing, but we’re always looking for someone who is ready to analyze, plan and maintain production systems as they scale in capacity and complexity. Someone who will refuse to do routine administration BUT will engineer an automated solution! Someone who will help the developers define the scalability requirements for a feature. Are you interested? Does it ring a bell? Come and join us:





How to connect to an EC2 instance using PowerShell

Hi guys, I don’t exactly know why, but apparently there are no articles out there with a good step-by-step guide to connecting from your local PC to a Windows Server 2012 R2 instance hosted on Amazon AWS EC2. This short article aims to fill that gap:


  • This article assumes some knowledge of AWS, the EC2 service and Windows Server 2012, but nothing is complicated, and I’ve added many links to extensive documentation.
  • PowerShell communication relies on the WinRM protocol, so it needs a specific port, 5985 TCP, reachable on the server (be advised: the default transport protocol is insecure HTTP).
  • The WinRM service is enabled by default on Windows Server 2012 R2, but the default Windows Firewall configuration allows connections on port 5985 only from the machine’s own subnet, so we need to log in to the machine using RDP and modify the default configuration of that firewall rule.
  • If your PC (the client!) is not part of the remote server’s domain, you need to add the remote server to the list of trusted hosts on YOUR PC (covered below).
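Before diving in, it helps to verify from the client that port 5985 is actually reachable. A quick, OS-independent sketch in Python (the hostname in the commented example is a placeholder, not a real instance):

```python
import socket

def winrm_port_open(host, port=5985, timeout=3):
    """Return True if a plain TCP connection to the WinRM port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address):
# print(winrm_port_open("ec2-203-0-113-10.compute-1.amazonaws.com"))
```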


  1. Deploy the VM with Windows Server 2012 ( docs )
  2. Modify the security group of the instance, adding a rule to open port 5985 TCP from your IP or from anywhere ( docs )
  3. Wait a few minutes for the machine to boot up completely, then connect to it using the Remote Desktop Protocol (RDP), aka the usual way to connect to a Windows instance on EC2 ( docs )
  4. Modify the Windows Firewall configuration to allow incoming connections to port 5985 from any IP (or a narrower range, as you please 🙂 ). To do so: Control Panel -> Windows Firewall -> Advanced Settings -> Inbound Rules -> “Windows Remote Management (HTTP-In)” where the profile is PUBLIC (make sure to choose the right one!) -> Properties -> Scope -> Remote IP Addresses -> Any IP Address (or, again, narrower if you know better!)


    Or use this simple PowerShell command:
    Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress "Any"

  5. Restart the Windows Firewall service (don’t ask me why, but sometimes the rules are not picked up until the service restarts; I’ve witnessed that myself) (docs)
  6. Then make sure that the WinRM protocol is working correctly on the server machine by running this command in a shell (not strictly needed, just to make sure it works):  Enable-PSRemoting -Force

  7. Then move to your local machine and make sure the WinRM service is running there as well; in a privileged shell:
    Start-Service -Name WinRM
  8. Then add the remote host to the list of trusted hosts, running this command in a privileged shell:
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "<remote host IP or name>"
    or, simpler but not exactly secure, use a wildcard:
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*"

  9. Then connect to the remote machine using one of the various options provided by PowerShell, such as:
    Enter-PSSession -ComputerName "<remote host>" -Credential $(Get-Credential)
    inserting the login credentials of the remote machine when requested ( docs )


The WinRM service can be configured server-side to use the more secure HTTPS on port 5986, or a COMPATIBILITY MODE running on port 80, usually used to work around firewall-related issues ( docs ).

Older versions of Windows have different requirements to set up Powershell Remoting / WinRM ( docs ).

And, obviously, this guide is generalizable to many other IaaS services (Azure, Digital Ocean).

Hope this helps somebody !


Guide: Intel 82573L gigabit ethernet with Ubuntu 11.04 and fix PXE-E05

hello guys,

big post today. I’ve finally updated my Ubuntu machine to the latest version, 11.04 Natty Narwhal… everything works out pretty well except for the wired ethernet controller. I’m using the

“Intel Corporation 82573L Gigabit Ethernet Controller”

this controller isn’t manageable via the usual Ubuntu Network Manager, nor is it listed in the output of ifconfig, and its status is UNCLAIMED:

$ sudo lshw -C network
*-network UNCLAIMED
description: Ethernet controller
product: 82573L Gigabit Ethernet Controller

There is no problem at all with Windows 7 or my old Ubuntu release 8.x , the card is fully working.

In the meantime, I have noticed a long-recurring error (it was there for a long time before 11.04) at computer boot time (during the BIOS phase), looking like a bootstrap error:

“Initializing Intel Boot Agent GE v.1.2.28 PXE-E05: LAN adapter’s configuration is corrupted or has not been initialized. The Boot Agent cannot continue.”

The Linux log messages helped me a little:

$ dmesg | grep e1000
[ 0.267811] pci 0000:01:00.0: reg 18 32bit mmio: [0xee100000-0xee10ffff]
[ 0.268161] pci 0000:00:01.0: bridge 32bit mmio: [0xee100000-0xee1fffff]
[ 0.346430] pci 0000:00:01.0: MEM window: 0xee100000-0xee1fffff
[ 0.346978] pci_bus 0000:01: resource 1 mem: [0xee100000-0xee1fffff]
[ 0.918428] e1000e: Intel(R) PRO/1000 Network Driver – 1.0.2-k2
[ 0.918432] e1000e: Copyright (c) 1999-2008 Intel Corporation.
[ 0.918486] e1000e 0000:02:00.0: Disabling L1 ASPM
[ 0.918510] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[ 0.918554] e1000e 0000:02:00.0: setting latency timer to 64
[ 0.918766] e1000e 0000:02:00.0: irq 29 for MSI/MSI-X
[ 0.990779] e1000e 0000:02:00.0: PCI INT A disabled
[ 0.990781] e1000e 0000:05:00.0: (unregistered net_device): The NVM Checksum Is Not Valid
[ 0.990788] e1000e: probe of 0000:02:00.0 failed with error -5

I have also been able to understand that this problem is not limited to the 82573L card, but is common to a large number of Intel ethernet cards (that is why you can easily understand the driver’s blacklisting in old Linux distributions): 82563, 82566, 82567, 82571, 82572, 82573, 82574, 82577, 82578, 82579, or 82583-based.

So what is going on? It looks like the network adapter’s (82573L) EEPROM is corrupted and a little messed up (error PXE-E05). This creates a checksum error for the NVM (“The NVM Checksum Is Not Valid”) that breaks the Ubuntu driver loading: therefore the eth0 alias is not created and there is no manageable ethernet adapter for the Ubuntu Network Manager.
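For the curious, the check the driver performs is conceptually simple: in Intel’s NVM layout, the 16-bit words from offset 0x00 to 0x3F must sum (mod 2^16) to the magic value 0xBABA, and a corrupted EEPROM breaks that invariant. A sketch of the idea (illustrative data, not a real EEPROM dump):

```python
NVM_SUM = 0xBABA  # magic value checked by the e1000e driver

def nvm_checksum_ok(words):
    """words: the first 0x40 16-bit words read from the EEPROM."""
    return (sum(words) & 0xFFFF) == NVM_SUM

# A consistent image: 0x3F data words plus a checksum word that closes the sum.
data = [0x1234] * 0x3F
data.append((NVM_SUM - sum(data)) & 0xFFFF)

print(nvm_checksum_ok(data))        # True
print(nvm_checksum_ok([0] * 0x40))  # False -- "The NVM Checksum Is Not Valid"
```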

Windows simply doesn’t check the NVM checksum: it uses the card anyway and everything works fine.

IMHO Intel messed up a little with the 82573 controller: there are too many similar errors out there. It seems to happen when there is a sudden power outage while the LAN card boots… total nonsense!

Anyway, we need to fix this!!! As you can easily understand, the idea behind this guide should work for many other Intel controllers of the same family: I’m unable to test them, but it’s probably worth giving it a shot!

And here is the guide… it’s not as long as it seems:

We need to remove the old 82573L driver, install the updated 82573 network controller driver, create an MS-DOS boot pen drive, reboot, flash the card’s EEPROM, and reboot again (there are a lot of sub-guides to ease the process for newbies: USE THEM).

Open the terminal:

$ sudo rmmod e1000e                       # unload the old driver module

$ sudo rmmod e1000                        # unload the old driver module (errors are OK)

$ sudo rm /lib/modules/2.6.38-8-generic/kernel/drivers/net/e1000e/ -rf                  # remove old drivers (errors are OK)

$ sudo rm /lib/modules/2.6.38-8-generic/kernel/drivers/net/e1000/ -rf                  # remove old drivers (errors are OK)

Download the latest Intel drivers from their site and extract them to your home directory.

Make sure you have build-essential installed.

Back to the terminal, and cd to your home directory:

$ cd       # to your home directory

$ cd e1000e-1.3.17/src                    # to the extracted drivers directory

$ sudo make install                       # install the drivers (no errors expected this time)

Now we need to go to the Intel site, then download and extract the Intel(R) Ethernet Connections Boot Utility, Preboot images, and EFI Drivers. Then prepare an MS-DOS bootable pen drive and copy the extracted files we just downloaded to the pen drive.

There are several ways to create a bootable MS-DOS pen drive: the Windows way (PREFERRED) and the Linux1, Linux2 and Linux3 ways (should I use the Saxon genitive now?). Choose your favorite one, but always REMEMBER TO PUT THE EXTRACTED FILES ON THE PEN DRIVE.

Now go read the important NOTE at the end of the page containing the disclaimer!

Now boot using the pen drive ( SUBGUIDE ) and, assuming you’re at the command prompt:

c:\>  cd bootutil                        # go to the bootutil directory

c:\>  bootutil -defcfg                   # force bootutil to load the default PXE configuration into the controller

# Georgi says “bootutil -nic=1 -defcfg” works better. Try it if the first form returns an error.

After that, reboot the PC and go back to Ubuntu.

Now everything should be working fine.

DISCLAIMER: You probably need to know that the Intel(R) Ethernet Connections Boot Utility WAS NOT designed to be used with onboard (also known as OEM) LAN cards (it is meant for PCI cards), therefore there is no sure way to predict its interactions with other onboard components like USB or sound controllers. I haven’t experienced any problem with my computer (HP dv6000) and I haven’t seen any negative report using Google, but there is no way to be 100% sure. What I can tell you is that this procedure is the only way I know to make these cards work; otherwise, you need to buy a new external card. In the end, use at your own risk.

As usual hope this was helpful to somebody.


Ok, flash post today.

Today at 1:30 AM PST, Gmail suddenly stopped working. It seems to be unreachable. Applications linked to Google Apps are also NOT working.


Google’s Gmail support team says the following:

“We’re aware of a problem with Gmail affecting a number of users. This problem occurred at approximately 1.30AM Pacific Time. We’re working hard to resolve this problem and will post updates as we have them. We apologize for any inconvenience that this has caused.”

Anyway, for all those who are experiencing technical problems accessing Gmail, or for whom Gmail is absolutely not working, there is a simple workaround:


What the hell is IMAP? Just don’t care! Using Gmail with IMAP means: use Gmail with Outlook Express, Thunderbird, Mail, iPhone Mail or any other IMAP client.

These are some useful links to configure the main mail clients with Gmail via IMAP:




Anyway, to configure any client with Gmail, the required settings are the following:

Incoming Mail (IMAP) Server – requires SSL:
Use SSL: Yes
Port: 993
Outgoing Mail (SMTP) Server – requires TLS (use authentication):
Use Authentication: Yes
Use STARTTLS: Yes (some clients call this SSL)
Port: 465 or 587
Account Name: your full email address (Google Apps users: your full address at your domain)
Email Address: your full Gmail email address (Google Apps users: your full address at your domain)
Password: your Gmail password
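The same settings also map directly onto Python’s standard `imaplib`/`smtplib`, if you want scripted access instead of a desktop client. A sketch under the settings above (credentials are placeholders you must supply; nothing is sent or fetched until you call the functions):

```python
import imaplib
import smtplib

IMAP_HOST, IMAP_PORT = "imap.gmail.com", 993   # IMAP over SSL
SMTP_HOST, SMTP_PORT = "smtp.gmail.com", 587   # SMTP with STARTTLS

def check_inbox(user, password):
    """Log in over IMAP/SSL and return the number of messages in the inbox."""
    with imaplib.IMAP4_SSL(IMAP_HOST, IMAP_PORT) as imap:
        imap.login(user, password)
        status, data = imap.select("INBOX", readonly=True)
        return int(data[0])

def send_mail(user, password, msg):
    """Send an email.message.Message via SMTP, upgrading to TLS first."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
```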

Anyway, I’m pretty sure everything is gonna be alright in a couple of hours…


As usual hope this is helpful to somebody.

How To Boot and Run Linux from a USB Pen Drive [Easy Way]


This time I’m gonna show you two easy ways to install and run two common Linux distributions from a pen drive, using either Windows XP/Vista or a Linux distribution, with wizard-driven procedures. It’s like a “Live” distribution, but from a USB drive. The two distributions are Fedora Core 9 (or 8) and BackTrack (what is BackTrack?!? Take a look at this), both with the persistence feature… what is the “persistence feature”??? Well, it’s the possibility to store the changes you make to the system (so, in fact, it’s NOT exactly like a Live distribution)… anyway…

Fedora Core 9 (a 2 GB pen drive is enough, 4 GB is better)

The Windows’s Way…

1. Insert the pen drive, and make sure it’s empty

2. Download the latest version of the “liveusb-creator” from here, then extract the zip file to a directory and run “liveusb-creator.exe”

3. Choose the Fedora distribution you prefer (the 9 is really better!), choose the USB drive, then the size of the “Persistent Overlay”, the space left to store the modifications you make to the system, or your files (for 4 GB units, choose the largest: 2047 MB; for 2 GB units, 1024 MB… more or less)

4. BUTTON : “Create the Live USB”

5. Wait a couple of hours…

That’s all… now you should have your distribution fully working…

The Linux’s Way…

1. Insert the pen drive, mount it and make sure it’s empty

1b. Install YUM if you haven’t got it yet (your system probably has it already installed…)

2. Open a terminal window or get a shell, go to an empty directory (you need a couple of GB of free storage space), and run:

# yum -y install syslinux PyQt4 git
$ git clone git://
# cd liveusb-creator
# ./liveusb-creator

3. Choose the Fedora distribution you prefer (the 9 is really better!), choose the USB drive, then the size of the “Persistent Overlay”, the space left to store the modifications you make to the system, or your files (for 4 GB units, choose the largest: 2047 MB; for 2 GB units, 1024 MB… more or less)

4. BUTTON : “Create the Live USB”

5. Wait a couple of hours…

That’s all… now you should have your distribution fully working… you SHOULD… this is still beta software, so no one is really sure about that…

BackTrack 3 Final (a 2 GB pen drive is enough, 4 GB is better)

The Windows’s Way…

1. Insert the pen drive and make sure it’s empty

2. Download the [ USB Version (Extended) ] from the official site

3. Extract the whole .iso file to the USB drive (feel free to use WinRAR)

4. Then open the root folder of the drive (e.g. G:\), and go to the “boot” folder

5. Run bootinst.bat and follow the onscreen instructions (just press ENTER if everything is OK)

6. That’s all…. should be working…

The Linux’s Way…

1. Insert the pen drive, mount it and make sure it’s empty

2. Download the [ USB Version (Extended) ] from the official site

3. Extract the whole .iso file to the USB Drive (feel free to use the extractor you prefer…like Ark)

4. Then open the root folder of the drive (e.g. /home/media/usb1 or /mnt/sda1), and go to the “boot” folder

5. Run the boot install script (double-click its icon, or run it with “./” from the shell) and follow the onscreen instructions (just press ENTER if everything is OK)

6. That’s all…. should be working…

Hope this was useful…. Happy Holidays…