Into The Box Notes: Bringing Legacy Apps Back To Life with *Box Micro-services, Brad Wood and Jon Clausen
People are writing “new apps” in CF, but the reality is that a lot of them are legacy apps
when you have a language that’s 20 years old, statistically a lot of the code will not be “fresh, new” code. Much of the time we have to work with an older codebase.
We want to get out of “legacy hell” but don’t know how
How do you eat an elephant? One bite at a time.
Spaghetti Code (procedural)
quick and dirty
how many of us started out writing CF this way?
can’t scale that
early 2000’s - ColdFusion MX, we got Objects and CFCs
added on top of the procedural
still monolithic app design
but at least we had a few “chunks” of stuff
but still difficult to scale
2005-ish - MVC frameworks
separate the concerns
helps enforce OOP
could have specialized team members
- front end guy in the /views folder
- back end guy in the DAOs, etc.
tight coupling between layers
add more layers for organization
still tightly coupled
still 1 giant app
sometimes they have a huge surface area
still difficult to scale
Modularity Ensues -
small, contained pieces
can have different teams of people working on different parts of your app
easier to distribute.
a monolith can only scale UP
- when you have a server and you just keep adding hardware to it
- you can’t scale these OUT
where is the dividing line where we can break pieces off and separate them out
the smaller the pieces you have, the easier you can scale out
“cohesion and coupling”
usually referred to in OOD (object-oriented design)
cohesion - like-minded pieces of the app, that should come together
- if we have a lot of “things sending mail” in the app, those are cohesive. they’re 1 similar “concern”
coupling - how tightly 1 piece of code is dependent on another piece
broke out “things” into their own modules
core is now much smaller
less coupling in the core
admin - can be uninstalled/deleted from ContentBox so the prod server is more secure
/api - put it in its own module so it can be uninstalled/deleted for security if need be.
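As a sketch, one cohesive concern (all the “things sending mail”) broken out as its own ColdBox module might look like this — assuming ColdBox’s standard module conventions; the module name and settings are hypothetical:

```cfml
// modules_app/mailServices/ModuleConfig.cfc
// Hypothetical module collecting every "thing sending mail" into one cohesive unit
component {

	this.title         = "Mail Services";
	this.entryPoint    = "mail";   // URLs route under /mail
	this.autoMapModels = true;     // WireBox maps models/ into the injector

	function configure(){
		settings = {
			// one place to change mail config, instead of scattered cfmail calls
			defaultFrom = "noreply@example.com"
		};
	}

}
```

Because the module is self-contained, it can be uninstalled (like the ContentBox admin above) without touching the core, and the core stops depending on mail-sending code directly.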
Lots of evolution of the tooling over the years
open source engines, shared experiences, etc
“I think I’ll just hold off and wait for the NEXT new thing”
reality: with legacy apps, there has never been a better time to look at Service-Oriented Architecture and microservices patterns in our applications.
big monolithic app
if it doesn’t perform, throw bigger hardware at it
if it still doesn’t perform, throw MORE bigger hardware at it.
didn’t have the ability to deploy “small, bite-size portions” of our app to different containers
time to break that monolithic cycle
move to an idea of micro-services architecture
not an easy transition
the mental model we started w/ is often set in stone
have to re-learn some ideas about “what good looked like”
a fundamental change in approach or underlying assumptions
won’t be an easy change at first
we move from “big hardware” to “platforms as a service (PaaS)”
amazon web services
Google app engine
In addition to PaaS, the web space has blown up with “Infrastructure as a Service” (IaaS)
Can develop your own PaaS for less than what you might pay a third-party PaaS
that paradigm shift:
less is more
we don’t need to make “big things that do a lot”. sometimes we can make “small things”
small things can be deployed on smaller commodity hardware
as a whole, use fewer resources than if we’d packaged them all together in the monolith
can reduce the requirement of your app
sometimes the “extra resources” used in the monolith are just devoted to “fault tolerance,” which isn’t as needed in the smaller micro-service
apps: used to be “collections of functions”
now: “collections of parts”
clients are used to “big release cycles”
from an agile standpoint, breaking things into smaller components,
allows us to have very small, short release cycles
fits well into Agile methodology
hardware independence -
your apps should be portable
with modularity (javaloader module in ColdBox, for ex)
can eliminate dependence on engine/hardware
handle loading classes at run-time
ORM libraries help too-
abstract out the database layer
less coupling to 1 database
disposable instances: scale on-demand, teardown when demand decreases
really only need micro-services for a PORTION of the day/week/year, not truly needed 24/7. Intranet app: when the company goes home at 5pm, the server does nothing any more, for ex.
think about “disposability”
scale servers up/down as needed
from legacy cfml apps, what are the steps we need to change?
1. identify performance pain points and bottlenecks in your app
your bug tracker app will tell you where the problems are
as you make a list, prioritize them from biggest to lowest priority to eliminate
2. take those bottlenecks, one at a time
build test and deploy new micro-services to deliver the functionality of your pain points
independently from the monolith app
build, deploy it
3. then update the end-points in the monolith to use the micro-service
eliminates code from the monolith
as you deploy the micro-service, UI becomes more responsive to the user
4. rinse and repeat, with additional bottlenecks and pain points
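Step 3 above might look like this inside the monolith — the endpoint keeps its old signature so callers are untouched, but the body now delegates to the extracted micro-service over HTTP (the service URL and payload shape are hypothetical):

```cfml
component {

	// Old callers are untouched; only the implementation changed.
	function sendInvoice( required struct invoice ){
		cfhttp(
			url    = "http://invoice-service.internal/api/v1/invoices",
			method = "post",
			result = "local.httpResult"
		){
			cfhttpparam( type="header", name="Content-Type", value="application/json" );
			cfhttpparam( type="body", value=serializeJSON( arguments.invoice ) );
		}
		return deserializeJSON( local.httpResult.fileContent );
	}

}
```

Once this delegation is in place, the old implementation code can be deleted from the monolith, shrinking its surface area one bottleneck at a time.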
Tool set to do that —
CommandBox - can scaffold out modularity, makes it easy to deploy apps that are modular in nature and can be deployed as micro-services (ala the ContentBox admin that can be deployed separate from ContentBox itself)
CFConfig - transfer config between servers easily.
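For example (assuming CFConfig’s export/import commands, e.g. `cfconfig export` on the source server and `cfconfig import` in a container build), the engine config can live in a JSON file committed alongside the code. A minimal, illustrative `.cfconfig.json`:

```json
{
	"requestTimeout": "0,0,0,50",
	"datasources": {
		"legacyDB": {
			"dbdriver": "MySQL",
			"host": "db.internal",
			"port": "3306",
			"database": "legacy_app"
		}
	}
}
```

The keys and values here are hypothetical; the point is that every new container of a micro-service can get identical engine settings without hand-clicking through an admin UI.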
documents how your micro-service will be consumed
many packages up there now, and growing
lots can save you time in reinventing the wheel.
You have to decide what the right tool for YOU to use is.
right tool might be a bash script, for example
but time and time again, most monoliths will be best served by bringing them into a modular MVC like ColdBox
once we have micro-services, how do we automate deployment?
we have ability to deploy services anywhere
containers are ubiquitous
can decide how big/small you want to go, how far geographically you want to distribute your containers.
what do we get from this?
makes you platform independent
reduces time-consuming dev-ops tasks via automation
gives you health checks and rolling updates, allowing for zero-downtime deployments
- old container stays in place until the health checks pass for the new container
built-in emergency procedures and configuration settings provide better application fault tolerance
if you’re not using automation, you’re creating additional pain points for yourself that are not necessary in this day and age.
portable between platforms, machines, data centers, and geographic regions
built-in security and isolation.
- only exposing certain “things”
if that container is compromised, it doesn’t expose the host
containers are transients, not singletons
available on demand but not singletons like the big servers we’re used to in the past.
Isolate all the things
instead of one big thing, doing many things
we have many things, each doing 1 thing very very well.
if we do this one small piece at a time
we can evolve into better quality applications
embrace microservices architecture