Spring Boot and Kubernetes
Service discovery in Kubernetes
You can use the internal DNS of Kubernetes, or you can use the same service discovery approach as in PCF (via Eureka). Since Kubernetes offers its own way to do service discovery, I will show that one in my example. We will deploy an instance of our weather app in Kubernetes in Listing 26, using some prebuilt containers and the deployment description (weather-app-v1.yaml) from Listing 27.
Listing 26
kubectl create -f weather-app-v1.yaml
kubectl get deployment
kubectl describe deployment weather-app
kubectl expose deployment weather-app --type="LoadBalancer"
kubectl get pods -l app=weather-app
kubectl get service weather-app
Listing 27
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: weather-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: weather-app
    spec:
      containers:
      - name: weather-app
        image: mgruc/weather-app:v1
        ports:
        - containerPort: 8090
You can now call the weather service via 'http://<external-ip>:8090/weather?place=springfield'. The service discovery information for the weather service is available inside Kubernetes through the environment variables WEATHER_APP_SERVICE_HOST and WEATHER_APP_SERVICE_PORT. That means we can remove the service discovery annotation from our application and our REST controller and just use these variables.
Listing 28 shows how you could use that in a Spring Boot app.
Listing 28
@RestController
public class ConcertInfoController {

    @Value("${WEATHER_APP_SERVICE_HOST}")
    private String weatherAppHost;

    @Value("${WEATHER_APP_SERVICE_PORT}")
    private String weatherAppPort;

    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @Autowired
    RestTemplate restTemplate;

    @RequestMapping("/concerts")
    public ConcertInfo concerts(@RequestParam(value = "place", defaultValue = "") String place) {
        // retrieve weather data in order to add it to the concert info
        Weather weather = restTemplate.getForObject(
                "http://" + weatherAppHost + ":" + weatherAppPort + "/weather?place=" + place,
                Weather.class);
        ...
    }
}
So let us deploy the concert app as well (Listing 29, concert-app-v1.yaml) by executing 'kubectl create -f concert-app-v1.yaml'.
Listing 29
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: concert-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: concert-app
    spec:
      containers:
      - name: concert-app
        image: mgruc/concert-app:v1
        ports:
        - containerPort: 8100
The concert app is able to find the weather app because the variables WEATHER_APP_SERVICE_HOST and WEATHER_APP_SERVICE_PORT are injected automatically by Kubernetes. That is just one of the options to implement service discovery within Kubernetes. If you want to try it from outside, check the publicly available IP (see Listing 30). The service is then reachable from outside, for example via 'curl http://<external-ip>:8100/concerts?place=hamburg'.
Listing 30
kubectl expose deployment concert-app --type="LoadBalancer"
kubectl get service concert-app
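Besides these environment variables, Kubernetes also offers DNS-based discovery: the 'kubectl expose' command above creates a Service named weather-app, which is resolvable by that name from inside the cluster. As a minimal sketch (assuming the default cluster DNS and the service port 8090 created above), the REST controller could look up the weather like this instead of using the injected variables:

    // sketch only: "weather-app" is resolved by the cluster-internal DNS to the Service's ClusterIP
    private Weather lookupWeather(RestTemplate restTemplate, String place) {
        return restTemplate.getForObject(
                "http://weather-app:8090/weather?place=" + place, Weather.class);
    }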
Treat backing services as attached resources
This principle also originates from the twelve-factor app, and comparable principles have been known for a long time. Any resilient system should support graceful degradation: if a service dependency is missing, the service should not crash. Instead, it should offer as much functionality as possible. If our service only works after a login, but the user database is not available, it will be hard to offer something useful; there are, however, plenty of other use cases where we can follow the principle.
A great and lean library that supports this in Java is Hystrix. Hystrix is a circuit breaker: if a service no longer responds correctly, it opens the circuit for a given time and stops propagating requests to that service.
Instead, we can define a fallback method. By following this principle, errors and loops are not propagated to the next service layer; the circuit breaker stops a possible loop or error propagation. Listing 31 is an example of this.
Listing 31
@RequestMapping("/hello")
@HystrixCommand(fallbackMethod = "hellofallback")
public String hello() {
    String aName = findName(); // placeholder for some complex logic to find a name
    return "Hello " + aName;
}

public String hellofallback() {
    return "Hello world";
}
In my example, if the weather service is offline, the concert app can still provide concert information.
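Applied to the concert controller, such a fallback could simply skip the weather enrichment. This is only a hedged sketch of the idea; the concertService helper is illustrative and not the actual code from the repository:

    @RequestMapping("/concerts")
    @HystrixCommand(fallbackMethod = "concertsWithoutWeather")
    public ConcertInfo concerts(@RequestParam(value = "place", defaultValue = "") String place) {
        Weather weather = restTemplate.getForObject(
                "http://" + weatherAppHost + ":" + weatherAppPort + "/weather?place=" + place,
                Weather.class);
        return concertService.findConcerts(place, weather); // illustrative helper
    }

    // fallback with the same parameters: return concert information without weather data
    public ConcertInfo concertsWithoutWeather(String place) {
        return concertService.findConcerts(place, null); // illustrative helper
    }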
Execute the app as one or more stateless processes
Another principle from the twelve-factor app is important for microservices, and the reason is obvious: if you want to scale at any time, and if you expect your service instance to be moved from one host to another at any time (e.g. auto failover), then you cannot keep state in your application. Any data that needs to be persisted must be stored in a backing service.
There are two major design mistakes that often result in keeping state:
- Sessions managed by the application server
- Local file persistence
Moving the session from the application server into a database like Redis is simple. Apart from some dependencies, you often need just one annotation to do this in Spring Boot. If you use Spring Security with web security enabled (@EnableWebSecurity) and you want to store your HTTP sessions in a database, then Listing 32 is all you need to keep your sessions in Redis.
Listing 32
@EnableRedisHttpSession
public class HttpSessionConfig {
}
To demonstrate this, I will add another app next to our weather-app and concert-app: a chat-app. In order to use the chat app, you have to log in. Additionally, you can ask in the chat window for concert or weather information; this data is then retrieved from the weather-app and concert-app. Image 3 shows the UI that showcases the functionality.
If you want to try it locally, you can install Redis directly or, better, use Docker. Another option is to start a Vagrant box with Redis; I have prepared one for this use case. If you want to use it, go to the folder vms/redis of my repository on GitHub and execute 'vagrant up'. The source code for the chat-app is in source/chat-app.
If Eureka, the weather-app, and the concert-app are still running from the previous example, additionally start the chat app by executing './gradlew bootRun'. Log in as 'homer' with password 'beer' on http://localhost:8080. You can now enter messages, which will be stored in Redis. The session itself is stored in Redis as well, and you can call other services indirectly, if they are available, e.g. by submitting '/weather springfield' in the text field of the app. If the services are not available, the command will not work, but everything else will continue to work.
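The actual chat-app code is in the repository; purely as a minimal sketch of the idea, messages can be stored in Redis with Spring Data Redis roughly like this (class name and key layout are illustrative):

    import java.util.List;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.data.redis.core.StringRedisTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class ChatMessageStore {

        @Autowired
        private StringRedisTemplate redisTemplate;

        // append a chat message to a Redis list, one list per room (illustrative key layout)
        public void storeMessage(String room, String message) {
            redisTemplate.opsForList().rightPush("chat:" + room, message);
        }

        // read all messages of a room back from Redis
        public List<String> messages(String room) {
            return redisTemplate.opsForList().range("chat:" + room, 0, -1);
        }
    }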
Getting rid of local file persistence for existing apps can be difficult. One solution is to use an object storage like AWS S3; there are other object storage solutions that use the same API. Take a look at Minio (https://github.com/minio/minio) if you want to install an S3-like storage locally. Instead of writing or reading a local file, you can use an AWS S3 Java client to write files to and read files from buckets. The next code snippet (Listing 33) illustrates the idea, but I will not use it in my examples.
Listing 33
@RestController
@RequestMapping("/filehandling")
public class SampleS3Controller {

    private AmazonS3Client client;
    private String bucketName;

    @RequestMapping(value = "upload", method = RequestMethod.POST)
    public void uploadFile(@RequestParam("name") String name,
                           @RequestParam("file") MultipartFile file) {
        try {
            ObjectMetadata objectMetadata = new ObjectMetadata();
            objectMetadata.setContentType(file.getContentType());
            client.putObject(new PutObjectRequest(bucketName, name,
                    file.getInputStream(), objectMetadata));
        } catch (Exception e) {
            ...
        }
    }
}
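For completeness, reading a file back works the same way through the S3 client. This is a hedged sketch along the lines of Listing 33 (the endpoint is illustrative and, like the upload example, not used in my examples):

    @RequestMapping(value = "download", method = RequestMethod.GET)
    public ResponseEntity<byte[]> downloadFile(@RequestParam("name") String name) throws IOException {
        // fetch the object from the bucket and return its content
        // (IOUtils is the AWS SDK utility com.amazonaws.util.IOUtils)
        S3Object object = client.getObject(new GetObjectRequest(bucketName, name));
        byte[] content = IOUtils.toByteArray(object.getObjectContent());
        return ResponseEntity.ok(content);
    }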
Execute the app as one or more stateless processes in PCF
Let us push our chat server to PCF and connect it to a Redis database (which is available as a service in PCF) so we can use it with the already deployed weather-app and concert-app. The name of the Redis service must be 'redis', or the binding data has to match the expected schema.
Listing 34 shows how to do it in PCF.
Listing 34
# artefact is in artefacts
cf push mgruc-pcf-chat-app -p chat-app-0.0.1.jar --random-route --no-start
cf bind-service mgruc-pcf-chat-app mgruc-service-registry
cf create-service rediscloud 30mb redis
# in case of an internal installation it would be: cf create-service p-redis shared-vm redis
cf bind-service mgruc-pcf-chat-app redis
cf start mgruc-pcf-chat-app
cf app mgruc-pcf-chat-app
You can now log in to the chat service (you get the URL from 'cf app mgruc-pcf-chat-app') and use it. It will store the sessions and messages in the Redis database and use the weather and concert services.
Execute the app as one or more stateless processes in Kubernetes
Let us deploy Redis and our chat service, and additionally make the service available from outside. We will deploy a simple Redis setup for our demo (Listing 35, see redis-master.yaml); in a real-life setup you would probably define a clustered configuration.
Listing 35
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Let’s deploy it.
Listing 36
kubectl create -f redis-master.yaml
kubectl expose deployment redis --type="LoadBalancer"
And now we deploy the chat app. I have added the connection information (using environment variables) to the application properties for this. Apart from that, our Spring Boot app is the same as in PCF or locally (Listing 37, see chat-app-v1.yaml).
Listing 37
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: chat-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: chat-app
    spec:
      containers:
      - name: chat-app
        image: mgruc/chat-app:v1
        ports:
        - containerPort: 8080
Let's now deploy the chat app and make it reachable from outside (Listing 38).
Listing 38
kubectl create -f chat-app-v1.yaml
kubectl expose deployment chat-app --type="LoadBalancer"
kubectl get service chat-app
Open it in your browser at http://<external-ip>:8080/. I will skip the file storage example here: since Kubernetes runs Docker containers, files can be mounted from the host if needed, or the S3 approach can be applied as well. The source for the Kubernetes examples is available under /source/docker-… in my repo.
Run admin/management tasks as one-off processes
This useful principle once more originates from the twelve-factor app manifest (XII. Admin processes). Admin tasks are often not part of the pipeline, because in a lot of companies administration is done by another department. In most companies, software changes are versioned and run through a pipeline.
Database changes, config changes, and file cleanups are treated as exceptions, even though their risk is often bigger than that of a software change. The exception I see most often is database changes: some companies have dedicated DBAs who are not required to commit their changes through the same pipelines. These changes are handled differently and not rolled out with the application change. That is why I will give just one example of how database changes can be integrated into the software.
Flyway is one of the libraries designed to execute versioned database changes. Spring Boot will automatically autowire Flyway with its DataSource and invoke it on startup if the dependency is present. Flyway supports versioned migrations (unique version, applied exactly once) and repeatable migrations (re-applied every time their checksum changes). Say you have a folder named db/migration on your classpath (e.g. the resource folder of the application); then you can add SQL files to that folder with names like 'V1__lorem.sql', 'V2__ipsum.sql' and so on. When you deploy the application, Flyway executes all SQL files that have not yet been executed on the database, in the given order (V[order]__[any name]).
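With Spring Boot you normally only add the flyway-core dependency and the migration scripts; nothing else is required. Purely to illustrate what happens under the hood, here is a minimal programmatic sketch with placeholder connection details (the exact API differs slightly between Flyway versions):

    import org.flywaydb.core.Flyway;

    public class MigrationRunner {
        public static void main(String[] args) {
            Flyway flyway = new Flyway();
            // placeholder connection details
            flyway.setDataSource("jdbc:postgresql://localhost:5432/concerts", "user", "secret");
            // default location; scripts like V1__lorem.sql and V2__ipsum.sql live here
            flyway.setLocations("classpath:db/migration");
            // applies all migrations that have not been executed yet, in version order
            flyway.migrate();
        }
    }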
Treat logs as event streams
If you accept that your application might be started on a different host at any time (see auto failover, autoscaling, …) and that it should be easy to scale, then local file systems are not reliable. That means writing your logs to files is not reliable either, and your logging capabilities have to scale as well. The consequence is that you treat your logs as streams (twelve-factor app, XI. Logs) and pipe them to different channels.
Treat logs as event streams in PCF
Cloud Foundry collects the standard output and all requests into one big stream. You can attach yourself to the stream with the command 'cf logs mgruc-pcf-chat-app [--recent]'. A better solution is to stream your PCF application logs to an external logging solution. PCF supports syslog-compatible log forwarding out of the box. Say we want to forward the logs of the pcf-chat-app to an external Logstash agent; then we can follow Listing 39.
Listing 39
cf cups my-logstash -l syslog://<ip_of_your_logstash_host>:<your_logstash_port>
cf bind-service mgruc-pcf-chat-app my-logstash
cf restage mgruc-pcf-chat-app
The same binding works with Splunk, Papertrail, or other services that support syslog.
Treat logs as event streams in Kubernetes
There are several ways to access your logs in Kubernetes. One way is the CLI, e.g. 'kubectl logs deployment/chat-app'. kubectl logs is fine for ad-hoc inspection, but usually you want to stream your logs to an external solution. The out-of-the-box support in Kubernetes is not as straightforward as in PCF; you have to decide on a strategy yourself and implement it.
Some of your options are:
- Push logs directly to a backend from within your application, i.e. use a log appender that sends directly to Splunk or another tool (see the sketch after this list).
- Use a streaming sidecar container, i.e. the application container writes its logs to a volume shared with a second container, which runs a log forwarder.
- Run a log forwarder on the host and mount the log files from the Docker container to the host.
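As a hedged sketch of the first option, a syslog appender can be registered with Logback programmatically (in practice you would usually configure this in logback.xml; host and port are placeholders):

    import ch.qos.logback.classic.Logger;
    import ch.qos.logback.classic.LoggerContext;
    import ch.qos.logback.classic.net.SyslogAppender;
    import org.slf4j.LoggerFactory;

    public class SyslogConfig {

        public static void configureSyslog() {
            LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

            // send all log events of the root logger to an external syslog endpoint
            SyslogAppender appender = new SyslogAppender();
            appender.setContext(context);
            appender.setSyslogHost("logstash.example.com"); // placeholder host
            appender.setPort(5514);                         // placeholder port
            appender.setFacility("USER");
            appender.start();

            Logger rootLogger = context.getLogger(Logger.ROOT_LOGGER_NAME);
            rootLogger.addAppender(appender);
        }
    }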
Dynamic Tracing and monitoring
If you have hundreds of services that call each other and your software changes dynamically and fast, then you want to know where you lose time, in order to troubleshoot latency problems and strange behaviour. That means, in addition to monitoring the services themselves, you have to monitor the network of microservices. There are several tools and cloud offerings available to monitor distributed apps, such as AppDynamics, New Relic, and Dynatrace, to name only a few; each of them could easily fill an article of its own.
Instead, I will show two open-source options:
- Hystrix with Turbine
- Zipkin
I already introduced Hystrix as a circuit breaker. Additionally, Hystrix exports metrics: it needs this information to open and close circuits (number of failed requests, response times, …), and the embedded Hystrix dashboard can visualize them for each service.
If you have hundreds of services and thousands of circuit breakers, you do not want to watch the individual dashboards of all services; instead, you want to collect the metrics and visualize them in one dashboard. This is where Turbine comes in handy. It is also open source from Netflix and is designed to aggregate Hystrix streams into a single dashboard.
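Outside of PCF (where the circuit-breaker dashboard service provides this, as shown later), a standalone Turbine aggregator can be built with Spring Cloud Netflix. A minimal sketch, assuming the spring-cloud-starter-turbine dependency and a matching Spring Cloud version:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.turbine.EnableTurbine;

    // aggregates the Hystrix streams of the registered services into one stream for the dashboard
    @SpringBootApplication
    @EnableTurbine
    public class TurbineApplication {
        public static void main(String[] args) {
            SpringApplication.run(TurbineApplication.class, args);
        }
    }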
The second framework I want to introduce is Zipkin. Zipkin is a distributed tracing system, which means it visualizes the trace of requests across several services. Zipkin is available as a self-contained, Spring Boot based jar; you can easily start it locally as a jar or as a Docker container (Listing 40).
Listing 40
# as jar
wget -O zipkin.jar 'https://search.maven.org/remote_content?g=io.zipkin.java&a=zipkin-server&v=LATEST&c=exec'
java -jar zipkin.jar
# or as Docker
docker run -d -p 9411:9411 openzipkin/zipkin
Alternatively, you can create your own Spring Boot application and add the annotation @EnableZipkinStreamServer to the main class to turn it into a Zipkin server. Once you have started Zipkin, you can browse to http://your_host:9411 to find traces. If you now want your services to report to Zipkin, you have to add a sampler that sends the tracing information to the Zipkin server.
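A minimal sketch of such a self-built Zipkin server, assuming the spring-cloud-sleuth-zipkin-stream dependency of the older Spring Cloud releases (which provides @EnableZipkinStreamServer) plus a stream binder such as RabbitMQ:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.sleuth.zipkin.stream.EnableZipkinStreamServer;

    // turns this Spring Boot app into a Zipkin server that collects traces via a stream binder
    @SpringBootApplication
    @EnableZipkinStreamServer
    public class ZipkinServerApplication {
        public static void main(String[] args) {
            SpringApplication.run(ZipkinServerApplication.class, args);
        }
    }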
The easiest way (but not the most performant) to do this is probably the AlwaysSampler (Listing 41).
Listing 41
@Bean
public AlwaysSampler defaultSampler() {
    return new AlwaysSampler();
}
Dynamic Tracing and monitoring in PCF
Let's try it in PCF. You can use my pre-built artefacts (see Listing 42).
Listing 42
# folder artefacts
# new zipkin service
cf push mgruc-pcf-zipkin -p zipkin.jar --random-route --no-start
cf bind-service mgruc-pcf-zipkin mgruc-service-registry
# new version of apps which export data to zipkin and the hystrix dashboard
cf push mgruc-pcf-chat-app -p chat-app-with-zipkin-0.0.1.jar --random-route --no-start
cf push mgruc-pcf-weather-app -p weather-app-with-zipkin-0.0.1.jar --random-route --no-start
cf push mgruc-pcf-concert-app -p concert-app-with-zipkin-0.0.1.jar --random-route --no-start
# attach hystrix-turbine dashboard
cf create-service p-circuit-breaker-dashboard standard mgruc-circuit-breaker-dashboard
cf bind-service mgruc-pcf-chat-app mgruc-circuit-breaker-dashboard
cf bind-service mgruc-pcf-weather-app mgruc-circuit-breaker-dashboard
cf bind-service mgruc-pcf-concert-app mgruc-circuit-breaker-dashboard
# attach to zipkin
cf app mgruc-pcf-zipkin
cf set-env mgruc-pcf-weather-app SPRING_ZIPKIN_BASE-URL http://<zipkin url>
cf set-env mgruc-pcf-concert-app SPRING_ZIPKIN_BASE-URL http://<zipkin url>
cf set-env mgruc-pcf-chat-app SPRING_ZIPKIN_BASE-URL http://<zipkin url>
# start the apps
cf start mgruc-pcf-zipkin
cf start mgruc-pcf-weather-app
cf start mgruc-pcf-concert-app
cf start mgruc-pcf-chat-app
cf apps
If you now use the chat app to store messages and request concert and weather data, Zipkin will visualize your dependencies and latencies, Eureka will show you all services, and the Hystrix dashboard will help you identify problems in the communication. Images 6 and 7 are screenshots of the PCF service discovery and the PCF Hystrix dashboard.
If you want to have a look at the code that enables Zipkin, check the weather-app-with-zipkin, concert-app-with-zipkin, and chat-app-with-zipkin folders in my repo.