When I was younger, in my family, I was assigned the task of watering the flowering plants around the house. Thinking back on it now, there were easily 50 plants of all shapes and sizes. So, I would have to shuffle around the yard, bucket in hand, dipping and watering. Some plants would get two dips, others one. I couldn’t use the hose, because that might damage the roots of the younger plants. I hated it.
Ever the creative, I used to come up with outlandish ideas to solve the predicament. Sadly, I never implemented any of them. Thus, I was left to water these plants by hand.
Last week, for Caribbean Developer Week, I came up with a demo, featuring Azure Functions, that is the closest I have ever come to solving my plant-watering problem from back then.
I built three Azure Functions:
Setup Waterer actually created more Azure Functions: Timer-triggered functions, each able to run on its own schedule.
GuidEnqueuer, alas poorly named, but good at pretending to be a plant food source, would receive an HTTP POST and enqueue it. Plant Waterer would pick the message up and display it on a console. No actual plants benefited from this demo.
As I gushed previously, I created the Setup Waterer function on top of the Azure Fluent SDK, and it worked well. Functions making functions. That’s what I really wanted to show.
I wanted to create a few storage accounts for students in my class to complete an assignment featuring Event Sourcing and Materialized Views.
So, here’s what I did.
Download/install the latest Azure command-line interface (CLI).
(While doing this, I realized I could have just used the Cloud Shell. I soldiered on with the download.)
Create a resource group to contain the accounts we’d need.
Create the accounts and output the storage account keys.
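The resource group step was a single command. A sketch, with a placeholder name and location (not the ones I actually used):

```shell
# Create a resource group to hold the class storage accounts.
# "cloud-class-rg" and "eastus" are placeholder values.
az group create --name cloud-class-rg --location eastus
```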
The command to make a single storage account is pretty straightforward:
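It is something along these lines, with placeholder names (storage account names must be 3–24 lowercase letters and numbers):

```shell
# Create a single storage account; the names here are placeholders.
az storage account create \
  --name studentstore01 \
  --resource-group cloud-class-rg \
  --location eastus \
  --sku Standard_LRS
```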
But I wanted to also output the keys and display them on a single line. The command to get the keys after the account is created is this:
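Roughly this, again with placeholder names; it returns a JSON array with a `value` field for each of the two keys:

```shell
# List the access keys for a storage account; output is a JSON array.
az storage account keys list \
  --account-name studentstore01 \
  --resource-group cloud-class-rg
```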
So, I used the jq program in bash to parse the JSON result and display both keys on a line. Thus, I created a script that would create the accounts and then output their storage account keys.
This is the script that produced the accounts and keys:
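A sketch of such a script; the resource group, location and student account names below are placeholders, not the ones from class:

```shell
#!/bin/bash
# Sketch: create one storage account per student, then print
# "account key1 key2" on a single line for each.
# Resource group, location and names are placeholders.
RG=cloud-class-rg
LOCATION=eastus

for student in alice bob carol; do
  account="store${student}"
  az storage account create \
    --name "$account" --resource-group "$RG" \
    --location "$LOCATION" --sku Standard_LRS > /dev/null
  # keys list returns a JSON array; pull both "value" fields onto one line.
  keys=$(az storage account keys list \
    --account-name "$account" --resource-group "$RG" \
    | jq -r 'map(.value) | join(" ")')
  echo "$account $keys"
done
```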
Overall, the longest part of the exercise was dealing with the difference between how the files were being saved on Windows and how they were being read by bash. But the accounts were created, and the class can get on with Assignment 2.
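The usual culprit there is Windows CRLF line endings, which bash chokes on; stripping the carriage returns fixes it. For example (the filename is a placeholder):

```shell
# Remove trailing carriage returns so bash can read the script.
sed -i 's/\r$//' create-accounts.sh
```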
In Cloud Technologies class today, we used both the course outline and the notes from MSFTImagine’s GitHub repo to talk through the differences between the service offerings.
I used the canonical service model responsibility chart to start the conversation off.
It’s fairly straightforward to talk through these divisions, of course. I often use them to drive home the NIST 2011 definition of cloud services, with emphasis on the service delivery models.
In today’s presentation, one of the things that jumped out at me was the slide that provided a distinction between SaaS Cloud Storage and IaaS.
Finally, when talking about the ever versatile Salesforce and how its PaaS solution works, I was reminded of the Online Accommodation Student Information System (OASIS 🙂 ) that I had built when I was in undergrad.
I’d built OASIS as a commission for the Office of Student Advisory Services. It was a tool to help off-campus students more easily find accommodation. Prior to OASIS, all the information lived in a notebook in an office. It was built before I learnt about the utility-based computing of the cloud. I’m thinking about using it as the basis of an exploration of the architectural changes needed to move an old service to the cloud.
Hopefully, I’ll be able to revisit it when we touch on Cloud Design Patterns.
Started back with the UWI Cloud Technologies course today. This class was an Introduction to Cloud generally, with some conversation about the course outline and expectations for assignments.
We’re still in the process of confirming the course outline, so I’ll share that next week. But I used the slides from the technical resources provided by the Azure Computer Science module on cloud technology.
On my way to class I met up with Naresh who runs the UWI’s Department of Computing and Information Technology servers. He gave me a quick tour of their deployment. I’m looking forward to him sharing some stories from setting up that environment in our IaaS classes in a few weeks.
My central thesis was that entities are moving away from simply Cloud-enabling existing solutions and keeping the Cloud as a backup. Analysts, architects and developers are moving decisively towards building solutions that are native to the Cloud.
When we teach on Cloud Technologies at UWI, we start with building the context.
Many students come to class with ideas about what cloud is from their experience with service providers. Perhaps they use Gmail for email, or OneDrive for storage. What they know is that some provider manages the concerns attached to a service they use.
We start off with a definition investigated and shared by the US National Institute of Standards and Technology (NIST). As outlined by NIST, when they developed their definition, they were going for a yardstick that could allow consistent comparison among service providers, deployments and strategies.
According to NIST, cloud computing is a model. A way of thinking with regard to requesting, deploying, managing and monitoring computing resources. This model anticipates broad network access as the means of interacting with service providers. The resources being provisioned are expected to be shared.
In the NIST version of the cloud model, there are 5 key characteristics, 3 service models, and 4 deployment models.
When delivering this understanding we tend to repeat those characteristics often. They are: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
Services being built today may feature the use of cloud resources but may not themselves actually be cloud services. Walking along with those characteristics, we encourage students to ask, “Can I order this up, pay, and have it available without human involvement?” If so, it’s self-service. Can service modification be done over a network? Does the underlying infrastructure automatically assign and un-assign resources? Does the billing reflect up-to-the-minute information on when a customer provisioned or de-provisioned services?
Those are the starting questions that can be used to evaluate services when spotted, or as they are being built.
The service models are Software-, Platform-, and Infrastructure-as-a-Service. The concept of “[x] as a Service” is essential to understand when considering the degree of abstraction.
Service models let a person evaluating or implementing cloud know what will be in their hands to manage versus what is being managed by a service provider.
And finally in the NIST-defined model of cloud computing there are the deployment models – private, public, community & hybrid.
We start with this definition because, from there, we expect students to understand how they need to be oriented in an MSc focused on Cloud Technology.
There are two forms of orientation I’ve experienced. One is knowing the responsibility developers & software designers have to build with cloud in mind, paying attention to relevant patterns & principles, such as service orientation.
The other is the great empowerment one should feel when embracing the use of these services. Literally, the savvy can provision servers, services and other resources via code. That is tremendously advantageous. The course is about exploring how.
When I teach these courses, I come at the students as someone who’s been witnessing the change in how software is conceptualized, built & delivered. It’s hoped that from the experiences shared, they’d be empowered to jump in, and take part.