The Raspberry Pi, a $35 credit-card-sized computer, is a popular choice for home automation projects, and I am running a few of them at home myself. But what is the true cost? Is the advertised $35 really all you spend? Before you know it you have thrown in more money for a decent power supply, an SD card, an enclosure and a WiFi dongle, and you end up somewhere in the $70-$100 range for a headless configuration, including VAT and the shipping costs for the components needed to get it running. Then a new model comes out and here we go again. I now have a couple of the early models retired in a drawer, collecting dust. How is that green (manufacturing, shipping and then disposing of millions of units) or cost-effective?

Thinking along these lines, I decided to try Amazon Web Services, more specifically the Elastic Compute Cloud (EC2) service. The AWS free tier offers a great opportunity to test things out for one year and then decide how to proceed. My interest is in a t2.micro instance; a 3-year all-upfront reservation costs $179. That makes four years (one year of free tier plus three years paid) of your own cloud virtual machine running Ubuntu for roughly $3.73 a month. A good deal, I say, and you have nothing to lose since you start with the one-year free trial before you decide.
What are the advantages and disadvantages of moving your home automation to the cloud?
– It is obviously greener. Imagine the pollution and resource use involved in manufacturing, shipping and eventually disposing of millions of these devices (over 5 million sold as of February 2015).
– Scalability. If the t2.micro instance starts to be a constraint, a few clicks upgrade you to a more powerful instance type. Running low on storage? No problem, just add more storage space and you are good to go. Easy indeed. By contrast, one of the main reasons I kept upgrading my Raspberry Pis was the extra RAM or faster CPU available on the newer models.
– Stability. We have all seen the SD card wear on the Raspberry Pi that comes with active file system use, and that has been a massive PITA. You can't really run a database reliably without fearing data loss or resorting to major trickery to reduce disk writes. You also never have to fear power outages, which are quite frequent where I live. Creating online backups is easy.
– Easy cloud deployment. Build and save AMIs, then share them for others to use with a few clicks. Want an Ubuntu VM running Apache + MySQL + PHP + emonCMS + the mosquitto MQTT broker + Node-RED + openHAB? You can have one in 30 seconds if I share a pre-configured AMI ID with you (see the sketch after this list).
– Easy access. Many ISPs make it hard for clients to run web services from domestic IPs. Running your cloud VM with an Elastic IP solves that problem.
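To illustrate the AMI workflow, here is a minimal sketch using boto3; the region, instance ID, image name and account ID are placeholders, and the same can of course be done from the console with a few clicks.

```python
# Minimal boto3 sketch: build an AMI from a pre-configured instance and share it
# with another AWS account. Region, IDs and names below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Snapshot the configured instance into an AMI
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",         # hypothetical instance ID
    Name="ubuntu-lamp-emoncms-mqtt-nodered",   # hypothetical AMI name
    Description="Ubuntu + Apache/MySQL/PHP + emonCMS + mosquitto + Node-RED + openHAB",
)
ami_id = image["ImageId"]

# Wait until the AMI is available, then grant launch permission to another account
ec2.get_waiter("image_available").wait(ImageIds=[ami_id])
ec2.modify_image_attribute(
    ImageId=ami_id,
    LaunchPermission={"Add": [{"UserId": "123456789012"}]},  # hypothetical account ID
)
print("Shared AMI:", ami_id)
```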
There are, of course, disadvantages as well, such as:
– Cloud vs. local hardware when the Internet connection is down. A locally running machine can still handle the home automation, whereas with cloud-hosted VMs your home may go wild.
– Locally connected peripherals, like the RFM2Pi board that routes wireless packets to the respective gateway of choice, can't run in the cloud. Yet. An RFM12/RFM69 + ESP8266 bundle is entirely feasible and would remove that shortcoming.
– Many people mention security when it comes to cloud services; I believe this can be addressed with appropriate measures.
Overall, I am not ready to move my home automation system completely to the cloud. I am running a hybrid solution now, with a few locally running Raspberry Pis handling the mission-critical home automation functions and forwarding data over MQTT to the cloud-based VM for storage, visualization and remote control.
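As a rough sketch of what such a forwarder boils down to (paho-mqtt 1.x style; broker host names, certificate path and topics are placeholders, not my actual Node-RED flows):

```python
# Rough sketch of a local-to-cloud MQTT forwarder using paho-mqtt.
# Host names, certificate path and topics are placeholders, not my actual setup.
import paho.mqtt.client as mqtt

LOCAL_BROKER = "localhost"          # broker on the local gateway Pi
CLOUD_BROKER = "vm.example.com"     # hypothetical cloud VM

cloud = mqtt.Client(client_id="home-forwarder")
cloud.tls_set(ca_certs="/etc/ssl/certs/ca-certificates.crt")  # system CA bundle; adjust for a private CA
cloud.connect(CLOUD_BROKER, 8883)
cloud.loop_start()                  # run the cloud connection's network loop in the background

def on_local_message(client, userdata, msg):
    # Republish every local sensor reading to the cloud broker
    cloud.publish(msg.topic, msg.payload, qos=1)

local = mqtt.Client(client_id="local-listener")
local.on_message = on_local_message
local.connect(LOCAL_BROKER, 1883)
local.subscribe("sensors/#", qos=1)
local.loop_forever()
```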
Comments?
I think you are overstating a bit. First of all, there is no real need to add a WiFi dongle, just plug into your WiFi router, and a good 5V power supply is surely something already available in any home. As for the SD card problems, there are a lot of techniques to mitigate them.
Finally, the life cycle of the Pi can be more than 5 years (for example, I am still running my initial 256MB Model B), although as we speak I have just realized that it may be time for an upgrade.
And in closing, you haven't considered the existing local Internet infrastructure needed to ensure communication with the "cloud".
But for sure the cloud is something that may be considered, and with a hybrid approach, partial online data in your home and the statistical data (a data warehouse, let's say) in some cheap web hosting account, you may get a long-running Pi just for processing plus something like $20/year in the cloud 😉
Good comments. Your system probably runs well on the 256MB version, but in my case Node-RED alone requires 128MB. I've given up on running my Pis in R/W mode 24/7, as I would inevitably end up with a badly corrupted SD card within a couple of months (five cards so far), not to speak of the loss of data. That limits the type of software you can run on it, hence the frustration with the whole concept. Currently my Pis run with a read-only file system and /tmp mounted in RAM, and act as local business logic processors and forwarders to the cloud. A web hosting account solves part of the issue, a good point; I do run that way currently, with my home Pis forwarding to the web hosting account where I host this blog. A VM offers much more control than that, allowing you to run your own setup with full root permissions, so having one for less than four bucks a month is well worth it. I'll be moving in that direction.
It has been a little over three years since the first 256MB version of the Pi came out, and there have already been 13 revisions: http://elinux.org/RPi_HardwareHistory#Board_Revision_History. I resisted buying the v2 when it came out a couple of months ago, as it doesn't really address the issues I outlined above.
Martin,
I have been running hybrid (AWS) for 4 years now, and I still cannot remove all the local CPUs; it's not that easy to partition a heterogeneous HA system, it seems.
My view here is: get quality components, buy good mechanical HDDs (or an SSD), and boot from the SD card only. That resolved the big issue with the RPi, at the cost of a little more power.
You absolutely need a stable PSU; most people end up spending twice (with regret) on these. Use a good PSU, for example an old laptop supply with a switch-mode step-down converter, and save yourself some cost (you can get them for free from a lot of computer repair shops). They are good for powering multiple Pi-type boards.
Most importantly, use good internal DNS and cloud-based APIs; this means you can re-partition your local hardware without needing to upgrade in many cases, and it allows for local redundancy (all at the expense of some local forward planning).
In summary, the hybrid gives some redundancy that would be more complicated to DIY, and its upgrade path is simpler (using cloud APIs locally can help too). But mostly it's about moving the non-realtime data processing/analytics, like emonCMS, away from the home and keeping the rest, like MQTT/openHAB, closer at hand.
Of course, some of the extra gear we all have is because we like to experiment, but the "core" working gear (the CPUs) is hard to dispose of entirely.
An external (spinning) HDD is definitely going to work out much better, and I second you on the good PSU as well; throwing in an old phone charger leaves you disappointed sooner or later. So yes, a setup like that works for small-scale HA projects consisting of a single home. But also think about the case where you would like to manage more properties, and things get a bit tricky. What I wanted to achieve is central cloud-based business logic that pulls the strings of the silly little actuators back at home (and possibly at other locations), but I can't see it happening reliably without, as you also say, locally running CPUs. Maybe implementing some basic "reflexes" in the actuators, so that they fall into a safe mode until they hear from the central brain, would help. I have a friend who wanted to automate a modern apartment building with 50+ apartments, including utility usage, security, disaster alarms etc. That would be a good example of the bigger-scale automation we will probably see more and more often in the years to come, and it would call for a different type of solution.
But also think about the case where you would like to manage more properties and things get a bit tricky….
That's where the cloud-based APIs help. With the relevant private networks spanning these properties through the cloud, you treat the sum of disparate parts as a homogeneous whole. Pis running Raspbian/Docker are treated no differently from a BBB running Arch, etc. (GPIO exposure excepted at the moment). The point here is to be able to easily replace a node A running software a with another node B running similar, API-compatible software, especially after a failure. Agreed, you have the cloud layer to contend with, and that can be a problem on a 256MB Pi, but projects like Hypriot and Ubuntu Snappy are helping on the efficiency front. My view now is that each compute node, even on the same physical LAN, should be treated as if it were a remote resource.
BTW: have you ever considered the Banana Pi (or other Pi-like boards), since it has a SATA port?
Together with its gigabit network interface, it can run a local NAS serving NFS for all the other Pis; this can solve the SD wear problem completely.
Considering your high level of expertise, I'm sure the smaller community, compared to the Pi's, shouldn't be a problem for you.
What I've looked at previously is the A20-OLinuXIno-LIME2-4GB. I've also experimented with booting a Raspberry Pi from an SD card that holds an initrd-enabled kernel: once the kernel boots and hands over to the initrd, a wireless network link is brought up and the root filesystem is mounted from an iSCSI target. From then on there is no further interaction with the SD card. It is a rather complex solution, though, and too prone to network issues.
The “ideal” solution is yet to be found 🙂
Hello Martin,
Just "bumped" into your site from a reference on OpenEnergyMonitor. Very nice articles.
Re virtual vs. physical: your points are spot on; home automation solutions will inevitably move to a hybrid of local and remote resources.
The versatility of, e.g., the Raspberry Pi allows folks to fall into what I consider easy but "dead end" solutions, e.g. adding a local hard disk of some sort.
It also helps conceal the ideal architecture of the whole solution i.e. separating the “compute” components from the “storage” components.
By that I mean we will have more and more "compute" resources, such as sensors, actuators and whatnot, which only collect or receive data transiently: it will be up to some other "storage" component to store interesting data reliably; in your case that means sending the data via MQTT to an EC2 instance backed by some sort of database (or maybe even S3?).
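Just to make that concrete, a minimal sketch of a cloud-side subscriber archiving each reading to S3 could look like this; I am only guessing at the setup, so the broker host, topic and bucket name are placeholders.

```python
# Minimal sketch of a cloud-side subscriber that archives MQTT readings to S3.
# Broker host, topic and bucket name are placeholders.
import json
import time

import boto3
import paho.mqtt.client as mqtt

s3 = boto3.client("s3")
BUCKET = "my-home-sensor-archive"   # hypothetical bucket name

def on_message(client, userdata, msg):
    # Store each reading under a timestamped key, e.g. sensors/livingroom/1431000000
    key = "%s/%d" % (msg.topic, int(time.time()))
    body = json.dumps({"topic": msg.topic,
                       "payload": msg.payload.decode("utf-8", "replace")})
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)    # broker running on the same EC2 instance
client.subscribe("sensors/#", qos=1)
client.loop_forever()
```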
I do think, though, that some sort of network-local "data aggregator" will become popular to ride out times when the cloud is not available; think of this as a tiered approach to storing data reliably.
I’m new to the embedded scene but do think we can use some of the lessons and technologies from mainstream IT to produce “bullet-proof” software running on sensors, etc.
We can expect AWS and the like to start adding value-added IoT services for receiving, managing, analysing and reporting on our data (sort of like emoncms.org).
BTW, security is no more of a technical issue for the cloud than it is for "on premise", but we do need to convince people of that.
Great comments. Indeed, my goal is to have the storage component based in the cloud for a number of obvious and not-so-obvious reasons, much of which has to do with the scale of things. You can have one setup handling basic home automation, but a better architecture would allow handling multiple locations (suburban home, seaside villa, office, apartment in the city, etc.).
What I believe will work (and you seem to have reached that conclusion as well) is to have 1) a local buffering gateway that collects sensor data and attempts cloud upload whenever a connection is available, and 2) reflexive behaviour of the sensors and actuators when the connection to the central cloud business logic is not available (e.g. turn off the water supply when a flood is detected, without waiting for the cloud business logic to tell us to do it). MQTT QoS levels allow broker-side persistence of messages and delivery once the connection is restored, so there is no need for explicit buffering at that end.
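As a rough illustration of relying on MQTT for that (placeholder host and topic, not my actual configuration), a QoS 1 subscriber with a persistent session lets the broker queue messages while the link is down and replay them on reconnection:

```python
# Rough sketch: with clean_session=False the broker keeps this client's QoS 1
# subscription and queues matching messages while it is offline, delivering
# them once the connection is restored. Host and topic are placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # (Re)establish the QoS 1 subscription; with a persistent session the broker
    # remembers it and the queued messages across disconnections.
    client.subscribe("home/commands/#", qos=1)

def on_message(client, userdata, msg):
    print("received:", msg.topic, msg.payload)

client = mqtt.Client(client_id="home-gateway", clean_session=False)
client.on_connect = on_connect
client.on_message = on_message
client.connect("vm.example.com", 1883)   # hypothetical cloud broker
client.loop_forever()                    # reconnects automatically; queued messages are replayed
```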
I agree on the security comment; folks still believe in security through obscurity. Yet these concerns do have grounds: it makes more sense for black-hat folks to attack the cloud-hosted data of IoT SaaS/PaaS providers, as they get a greater "RoI", so to speak, compared to hacking into someone's connected kitchen light bulb.
We seem to be very much on the same page wrt IoT.
Going back to security: agreed wholeheartedly, security through obscurity is no security at all.
In my corporate career I've managed security teams (but was not, and am not, a security specialist). Even so, as a security amateur, I do understand AWS's security mechanisms from an engineering perspective and have no doubt I could build a cloud-based solution as secure as many corporate environments.
But now you’ve made me worried about black hats going after my kitchen light bulb so I’m intending to make it a “honey pot” 🙂
Best regards.
Hello Martin,
I like the way you are thinking about the costs and want to share some thoughts that may be helpful. Instead of AWS I stick with DigitalOcean, where you don't need to pay $179 in advance and the monthly cost is only $5. With the referral link I mentioned you can get $10, which is enough for a two-month tryout.
Regarding Apache + MySQL + PHP + emonCMS + the mosquitto MQTT broker + Node-RED + openHAB, I recommend packing all of this into Docker containers and sharing them with others via Docker Hub.
I do have a VM over at DigitalOcean and think it is much friendlier and has no hidden costs. I’m planning to move this blog to that VM sometime soon.
Regarding Docker: this is what I'm seriously considering and am actually testing right now. It seems absolutely the way to go.
Cheers!
Hi Martin,
Very interesting perspective. I feel very inspired to conduct a similar experiment 🙂
I'm curious about some of the technical details, though, since I worry a bit about the security of such a setup. Being in the cloud, and I'm guessing you're not building an isolated WAN with enterprise VPN/networking equipment, you need a secure way to pass messages between the cloud VM and your LAN over the Internet. So my take would be to run an MQTT broker both on the cloud VM and on some local device, then bridge the two brokers and protect them with TLS. That way the local devices can be a bit looser on security measures, and it also solves the problem of getting messages from the cloud back to the LAN (I guess that's why all the mainstream IoT hardware vendors have these 'hubs' and 'gateways'). It also enables local M2M without an Internet connection, although with degraded logic if, say, your Node-RED is hosted on the VM.
Can you please shed some light on the architecture? I would love to get more ideas on how to go about this while still maintaining an adequate level of security. Thanks.
Hi Lars,
My current setup is pretty much as you describe it: Node-RED running on a local RPi with a read-only file system, acting as a forwarder/fail-safe handler towards a cloud-hosted stack of MQTT broker + Node-RED + emonCMS, over TLS.
I also have some ESP8266 nodes that connect directly over TLS to the cloud-hosted MQTT broker, without needing the intermediate gateway. That's pretty cool as well, a solution with many advantages and some disadvantages.
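For reference, the TLS client side of such a connection is little more than pointing the client at the CA certificate that signed the broker's server certificate; here is a minimal paho-mqtt sketch with placeholder host, credentials and certificate path (the ESP8266 nodes do the equivalent in firmware):

```python
# Minimal TLS publish sketch with paho-mqtt. Host name, credentials and the
# CA certificate path are placeholders, not my actual configuration.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="tls-test-node")
client.tls_set(ca_certs="/etc/mosquitto/certs/broker-ca.crt")  # CA that signed the broker cert
client.username_pw_set("sensor-user", "sensor-password")       # hypothetical credentials
client.connect("vm.example.com", 8883)                         # TLS listener on the cloud VM
client.loop_start()

info = client.publish("sensors/garden/temperature", "21.5", qos=1)
info.wait_for_publish()   # block until the QoS 1 handshake completes
client.loop_stop()
client.disconnect()
```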
While TLS addresses the security of data in transit, data at rest also needs protection. Hackers would most likely target that stored data rather than attempt MITM sniffing.
I haven't done encryption of the data yet, but one approach is to keep the data in the database encrypted, protecting it from eventual unauthorized access. Reporting/analysis tools would then require a password/key to decrypt the data on the fly.
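A minimal sketch of that idea with the Python cryptography library (symmetric Fernet encryption; the key handling is simplified and the payload format is made up for illustration):

```python
# Minimal sketch: encrypt sensor readings before they are written to the
# database and decrypt them on the fly for reporting. Key handling is
# simplified and the payload format is made up for illustration.
from cryptography.fernet import Fernet

# In practice the key would come from a config file or key store,
# not be generated on every run.
key = Fernet.generate_key()
fernet = Fernet(key)

reading = b'{"sensor": "livingroom", "temperature": 21.5}'
encrypted = fernet.encrypt(reading)        # store this value in the database

# Reporting side: decrypt on the fly with the same key
decrypted = fernet.decrypt(encrypted)
print(decrypted.decode("utf-8"))
```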
Cheers,
Martin
This is well worth reading:
https://www.justinribeiro.com/chronicle/2012/11/08/securing-mqtt-communication-between-ardruino-and-mosquitto/