Thursday, October 13, 2011

Spring 3.1.0.RC1 - JPA EntityManagerFactory bootstrapping without persistence.xml

This will be a quick post. Spring 3.1.0.RC1 was released today. This article takes a quick look at one of the enhancements in Spring 3.1:

3.1.12 JPA EntityManagerFactory bootstrapping without persistence.xml
In standard JPA, persistence units get defined through META-INF/persistence.xml files in specific jar files which will in turn get searched for @Entity classes. In many cases, persistence.xml does not contain more than a unit name and relies on defaults and/or external setup for all other concerns (such as the DataSource to use, etc). For that reason, Spring 3.1 provides an alternative: LocalContainerEntityManagerFactoryBean accepts a 'packagesToScan' property, specifying base packages to scan for @Entity classes. This is analogous to AnnotationSessionFactoryBean's property of the same name for native Hibernate setup, and also to Spring's component-scan feature for regular Spring beans. Effectively, this allows for XML-free JPA setup at the mere expense of specifying a base package for entity scanning: a particularly fine match for Spring applications which rely on component scanning for Spring beans as well, possibly even bootstrapped using a code-based Servlet 3.0 initializer.

Source: New Features and Enhancements in Spring 3.1

Here's my existing configuration:


This requires an extra META-INF/persistence.xml to make it work:
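Since the original file isn't reproduced here, a minimal sketch of such a persistence.xml might look like the following (the unit name is illustrative; in this style of setup the file carries little more than the unit name):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <!-- little more than a unit name; everything else comes from Spring -->
    <persistence-unit name="punit" transaction-type="RESOURCE_LOCAL"/>
</persistence>
```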


How do we improve this?

Based on the docs, we can remove the persistence.xml altogether. But how do we declare the extra configurations related to our ORM? And how does the entityManagerFactory know where our entities are?

Here's how:
1. Delete the META-INF/persistence.xml

2. Declare a packagesToScan property

3. Declare a jpaPropertyMap

Here's the final configuration:
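Since the screenshot isn't reproduced here, the sketch below shows what such a bean definition might look like (the bean names, base package, and Hibernate properties are illustrative, not my actual values):

```xml
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- replaces persistence.xml: scan this base package for @Entity classes -->
    <property name="packagesToScan" value="org.mycompany.myapp.domain"/>
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
    </property>
    <!-- ORM-specific settings formerly kept in persistence.xml -->
    <property name="jpaPropertyMap">
        <map>
            <entry key="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
            <entry key="hibernate.hbm2ddl.auto" value="validate"/>
        </map>
    </property>
</bean>
```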


If you want to try this with an actual project, you can either create a new one or play with my demo project Spring MVC: Integrating MySQL, MongoDB, RabbitMQ, and AJAX, which has a GitHub repo.

That's all folks!


Sunday, October 9, 2011

Tomcat: Clustering and Load Balancing with HAProxy under Ubuntu 10.04 - Part 2

Review


In the previous section, we set up a simple environment containing clustered Tomcat instances and HAProxy for load balancing. In this section, we will test our load-balancing environment and explore various strategies to improve our setup.

Table of Contents


  1. Setting-up the Environment
    • Download Tomcat
    • Configure Tomcat
    • Run Tomcat
    • Download HAProxy
    • Configure HAProxy
  2. Load Balancing
    • Default Setup
    • Sharing Sessions
    • Configure Tomcat to Share Sessions
    • Retest Session Sharing
    • Session Sharing Caveat
  3. HAProxy Configuration
    • Configuration File
    • Logging

Load Balancing


Default Setup

After downloading and installing Tomcat and HAProxy, we will now test the default load-balancing behavior.

Open a browser and visit the following link:
http://localhost/

It should display the following page:


Notice we did not indicate any port. By default the browser will use port 80 for HTTP requests. The previous link is equivalent to:
http://localhost:80/

This means HAProxy is able to redirect our requests from port 80 to the Tomcat instances. If we check the HAProxy logs, we can see that the request is redirected to tomcat1:
localhost haproxy[4530]: 127.0.0.1:42377 [06/Oct/2011:07:50:57.054] http-in servers/tomcat1 0/0/0/2/28421 200 25030 - - --NN 0/0/0/0/0 0/0 "GET / HTTP/1.1"

Let's pretend that tomcat1 has failed by shutting it down manually. To shutdown tomcat1, run the following command:
sudo /usr/local/tomcat-7.0.21-server1/bin/shutdown.sh

HAProxy's stats page should display that tomcat1 is dead. To display the stats page, open a browser, and visit the following link:
http://localhost/admin?stats


Now, let's check if we can still access the main page. Open a browser and visit the previous link:
http://localhost/

You should see the following page:


Notice the web page is still available! It means HAProxy is able to redirect our request from an inactive server to an active one.

If we check the HAProxy logs, it shows that our request has been redirected to tomcat2:
localhost haproxy[4530]: 127.0.0.1:56619 [06/Oct/2011:07:58:17.761] http-in servers/tomcat2 17/0/0/2/27825 200 13075 - - --NN 0/0/0/0/0 0/0 "GET / HTTP/1.1"

Let's turn off tomcat2. This means all our servers are down! Visit the localhost page again, and we should get the following response:


The web page is down! HAProxy's stats page shows that the backend servers are down:


Sharing Sessions


If we are serving a web page that holds session information, we expect that information to remain available regardless of whether tomcat1 or tomcat2 is down.

Imagine a shopping cart. You're selecting items on a page. Behind the scenes, the server handling your requests crashes. You expect your original shopping cart to remain intact; otherwise, you'll have to start again from scratch.

Let's verify this behavior by examining the sample applications in the Tomcat examples directory. These examples come built in with the Tomcat installation.

Before we proceed, please make sure your environment is as follows:
Server      Status
Tomcat 1    Down
Tomcat 2    Up

Then open up a browser, and visit the following page:
http://localhost/examples/jsp/jsp2/simpletag/hello.jsp

This is what you should see:


This application is one of the built-in examples included in the Tomcat installation. On my computer, it resides at:
/usr/local/tomcat-7.0.21-server1/webapps/examples

I'm going to examine the session ID returned by this page using Google Chrome's Developer Tools (see http://code.google.com/chrome/devtools/). Here's an actual screenshot:


The session ID reads 697E0084595762C85952E2AFEB7B56FD. If you're running this guide with an actual Tomcat, your session ID will vary.

Now, let's change our environment. Before we proceed, make sure this is your environment:
Server      Status
Tomcat 1    Up
Tomcat 2    Down

Open up a browser, and visit the following page again:
http://localhost/examples/jsp/jsp2/simpletag/hello.jsp

It should still display the same page. Let's examine the session ID returned by this second request:


The session ID reads E501914ABC8DD2F2EC82A4B5123B51AA in the Response Header section; whereas it reads 697E0084595762C85952E2AFEB7B56FD in the Request Header section.

If we refresh the page, the Request Header now has E501914ABC8DD2F2EC82A4B5123B51AA, and the original session ID 697E0084595762C85952E2AFEB7B56FD is gone forever. This means that when we shut down tomcat2, the session was not transferred to tomcat1.

Although we're seeing the same page, we're actually operating in different sessions. Imagine if this is a shopping cart. Suddenly, all your orders are gone! Time to file a support ticket!

How do we resolve this issue? The solution is simple: enable session sharing. How? We follow the instructions given in the Apache Tomcat 7 Clustering/Session Replication HOW-TO reference.

Configure Tomcat to Share Sessions


The key to enabling session sharing is to declare two XML elements: one in your application's web.xml (1) and the other in Tomcat's server.xml (2):

1. <distributable>
2. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">

Let's declare these two XML elements in our "Hello World SimpleTag Handler" example. It's important to declare them in every Tomcat instance where our application resides.

Let's do that now.

1. Go to your Tomcat 1 directory and find the examples directory. On my computer, the directory is:
/usr/local/tomcat-7.0.21-server1/webapps/examples

2. Under WEB-INF, open web.xml and declare a <distributable> element. Place it just above the filter elements. See screenshot below:


3. Next, edit server.xml. On my computer, it resides in:
/usr/local/tomcat-7.0.21-server1/conf

Declare a <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"> element.
Place this element just below the Engine element. See screenshot below:
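In sketch form, the two declarations look like this (surrounding elements are Tomcat's defaults and are omitted here):

```xml
<!-- webapps/examples/WEB-INF/web.xml: mark the webapp as distributable -->
<distributable/>

<!-- conf/server.xml: nested inside the Engine element -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```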


We've configured tomcat1. Now, configure tomcat2 by repeating the same steps.

Retest Session Sharing


After configuring both Tomcat instances, we need to restart them so that the changes will take effect.

Now, update your environment, and make sure it follows this scenario:
Server      Status
Tomcat 1    Down
Tomcat 2    Up

Open a browser and visit the following page:
http://localhost/examples/jsp/jsp2/simpletag/hello.jsp

Using Google Chrome's Developer Tools, the session ID is 16E9D9B83CFF02196DBC794CE3E0AB3D



Update your environment, and make sure it follows this scenario:
Server      Status
Tomcat 1    Up
Tomcat 2    Down

Again, open a browser and visit the following page again:
http://localhost/examples/jsp/jsp2/simpletag/hello.jsp

Using Google Chrome's Developer Tools, the session ID reads 16E9D9B83CFF02196DBC794CE3E0AB3D


Notice we have the same session ID! This means our session information has been successfully retained and reused across our Tomcat instances.

Session Sharing Caveat


Everything seems fine now. However, I want to emphasize an important requirement of session sharing. To understand what I mean, let's run another built-in example application.

Open a browser, and visit the following page:
http://localhost/examples/jsp/sessions/carts.html

It should display a shopping cart:


Try adding an item. Immediately, an exception will be thrown:


The exception reads:
java.lang.IllegalArgumentException: setAttribute: Non-serializable attribute cart

Why are we getting this error? If we study the Tomcat 7 reference for clustering, we will find the following information:

To run session replication in your Tomcat 7.0 container, the following steps should be completed:

- All your session attributes must implement java.io.Serializable
- Uncomment the Cluster element in server.xml
- Make sure your web.xml has the <distributable/> element

Source: http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html

The reference is telling us to ensure that our session attributes are serializable! Based on the error message, our cart is not serializable.

Let's examine the source code of this cart class. You can find it within your Tomcat examples folder. On my computer, it is located at:
/usr/local/tomcat-7.0.21-server1/webapps/examples/WEB-INF/classes/sessions/DummyCart.java

Here's the source code:


To make this class serializable, we just implement the java.io.Serializable interface as follows:
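I won't reproduce the example's full source here, but the essential change is the implements clause. The sketch below is a simplified stand-in for the cart class (the field and method names are illustrative, not the actual example source); the roundTrip helper mimics what session replication does when it serializes the attribute:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Vector;

// Simplified stand-in for the example's cart session attribute.
// The fix: the class now implements java.io.Serializable, so Tomcat
// can replicate it across the cluster without throwing
// "setAttribute: Non-serializable attribute cart".
public class DummyCart implements Serializable {

    private static final long serialVersionUID = 1L;

    private final Vector<String> items = new Vector<String>();

    public void addItem(String name) {
        items.add(name);
    }

    public Vector<String> getItems() {
        return items;
    }

    // Serializes the cart and reads it back, mimicking session replication.
    public static DummyCart roundTrip(DummyCart cart) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(cart);
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (DummyCart) in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Had the class not implemented Serializable, the writeObject call above would fail with a NotSerializableException, which is the same root cause as the Tomcat error we saw.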


You can compile this by yourself. Or you can download my patched version of DummyCart.class (click here to download). To use this patch, follow these steps:

  1. Go to your Tomcat examples directory. In my computer this translates to:
    /usr/local/tomcat-7.0.21-server/webapps/examples/WEB-INF/classes/sessions/
  2. Replace the old DummyCart.class with the patched version. Alternatively, rename the old one instead of deleting it.
  3. Repeat the previous steps for all your Tomcat instances.
  4. Restart all Tomcat instances.
Let's revisit our shopping cart. Try adding an item. Notice you're now able to add an item without any exceptions.


If we check the HAProxy logs, our request went to tomcat2:
http-in servers/tomcat2 890/0/0/6/30760 304 3109 - - --NN 0/0/0/0/0 0/0 "GET /examples/jsp/sessions/carts.html HTTP/1.1"

The session ID is 84061AA7EF1EF6CADE7489113700481E, as shown in Chrome's Developer Tools (I have omitted the screenshot).

Let's turn off tomcat2 and add a new item in the shopping cart.

You should be able to add a new item:

HAProxy logs show that we're now using tomcat1 instance:
http-in servers/tomcat1 0/0/0/35/34234 200 2924 - - --IN 0/0/0/0/0 0/0 "GET /examples/jsp/sessions/carts.jsp?item=Switch+blade&submit=add HTTP/1.1"

Our session ID is still 84061AA7EF1EF6CADE7489113700481E, as shown in Chrome's Developer Tools:


Try switching servers off and on. Just make sure there's at least one server running. Notice the session ID never changes.

Next Section


We've successfully implemented load balancing using HAProxy and session sharing among our Tomcat instances. We've learned how to troubleshoot session IDs using Google Chrome's Developer Tools. In the next section, we will configure HAProxy logging so that we can easily troubleshoot problems arising from this tool. Click here to proceed.


Tomcat: Clustering and Load Balancing with HAProxy under Ubuntu 10.04 - Part 1

Introduction


In this article we will explore how to set up a simple Tomcat cluster with load balancing using HAProxy. Our environment will consist of two Tomcat (latest version) instances running under Ubuntu Lucid (10.04 LTS). We will use sample applications from the built-in Tomcat package to demonstrate various scenarios. Later in the tutorial, we will study in depth how to configure HAProxy and how to set up logging.

What is HAProxy?
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing - http://haproxy.1wt.eu/

What is Tomcat?
Apache Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies... Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations - http://tomcat.apache.org/

Table of Contents


  1. Setting-up the Environment
    • Download Tomcat
    • Configure Tomcat
    • Run Tomcat
    • Download HAProxy
    • Configure HAProxy
  2. Load Balancing
    • Default Setup
    • Sharing Sessions
    • Configure Tomcat to Share Sessions
    • Retest Session Sharing
    • Session Sharing Caveat
  3. HAProxy Configuration
    • Configuration File
    • Logging

Frequently Asked Questions (FAQ)


Q: Why is this tutorial in Linux instead of Windows?
A: By default, the source code and pre-compiled binaries for HAProxy are tailored for Linux/x86 and Solaris/Sparc.

Q: Why Ubuntu 10.04 instead of another Linux distribution?
A: My local machine is running Ubuntu 10.04 LTS.

Q: How do I install HAProxy in Windows?
A: You can install it via Cygwin. Check this link for more info.

An Overview


Before we start with actual configuration and development, let's get a visual overview of the whole setup. The diagram below depicts our simple load balancing architecture and the typical flow of data:

1. A client visits our website.
2. HAProxy receives the request and performs load balancing.
3. Request is redirected to a Tomcat instance.
4. Response is returned back to HAProxy and back to the client.


Notice that in the backend all servers share the same IP address (127.0.0.1), the localhost. This is useful for testing purposes, but in production we would normally put each server on its own machine with its own IP address.

Since we have three servers sharing the same IP address, we have to assign each server a different port:

Server      Port
HAProxy     80
Tomcat 1    8080
Tomcat 2    8180

The client should only have access to the IP address and port where HAProxy resides. If we let the client bypass HAProxy by directly connecting to any of the Tomcat instances, then we have defeated the purpose of load balancing.

Requirements


At the time this article was written, I was using the following environment and tools:

Name       Version         Official Site
HAProxy    Stable 1.4.18   http://haproxy.1wt.eu/
Tomcat     7.0.22          http://tomcat.apache.org/
Ubuntu     10.04           http://www.ubuntu.com/

The HAProxy and Tomcat versions are the latest stable releases as of this writing (Oct 9, 2011). This tutorial should work equally well on Tomcat 6. For the operating system, I recommend Ubuntu 10.04 because that's where I tested and set up this tutorial. In any case, you should be able to apply the same steps to your favorite Linux distro. Windows users can use Cygwin instead (see the FAQ).

Setting-up the Environment

To ensure we're on the same page, I'm providing a walkthrough for installing and configuring Tomcat and HAProxy. We will then test a basic setup to verify that our environment is configured correctly.

Download Tomcat

To download Tomcat visit its official page. Alternatively, you can visit this link directly: http://tomcat.apache.org/download-70.cgi

Select a core binary distribution. For my system, I opted for the zip version (the first option).

Extract the download to your file system. In my case, I extracted the zip file to /usr/local and renamed the extracted folder to tomcat-7.0.21-server1.

Copy this folder in the same location /usr/local and rename the copy to tomcat-7.0.21-server2. The final result should be similar to the following:

Your Tomcats are installed.

Configure Tomcat


If we examine the server.xml inside the Tomcat conf folder, we will discover that Tomcat uses the following ports by default:

Tomcat Default Ports
Element           Port
Shutdown          8005
HTTP Connector    8080
AJP Connector     8009

We have two Tomcat instances. If we run both, we'll encounter a port conflict because both instances use the same port numbers. To resolve this, we will edit one of the server.xml files; in our case, we will choose Tomcat instance 2.

Go to tomcat-7.0.21-server2/conf and open up server.xml. Find the following lines and replace them accordingly:
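For reference, the changed lines in Tomcat 2's server.xml end up looking roughly like this (other attributes are Tomcat's defaults and are trimmed here for brevity):

```xml
<!-- shutdown port: 8005 changed to 8105 -->
<Server port="8105" shutdown="SHUTDOWN">

<!-- HTTP connector: 8080 changed to 8180 -->
<Connector port="8180" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

<!-- AJP connector: 8009 changed to 8109 -->
<Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />
```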



Save the changes. At the end, your Tomcat instances should be configured as follows:

Tomcat 1 & 2 Ports
Element           Tomcat 1    Tomcat 2
Shutdown          8005        8105
HTTP Connector    8080        8180
AJP Connector     8009        8109

Run Tomcat

After configuring our Tomcat installations, let's run them and verify that they're listening on the specified ports.

Tomcat 1
Since I'm using Ubuntu, I can run Tomcat 1 by issuing the following command in the terminal:
sudo /usr/local/tomcat-7.0.21-server1/bin/startup.sh

If you get a permission error, make startup.sh executable first. To verify that Tomcat 1 is running, open up a browser and visit the following link:
http://localhost:8080

Here's the resulting page:


Tomcat 2
To run Tomcat 2, follow the same steps earlier. This time we'll execute the following command:
sudo /usr/local/tomcat-7.0.21-server2/bin/startup.sh

Open another browser and visit the following link:
http://localhost:8180

Here's the resulting page:


We've successfully set up two Tomcat instances. Next, we will download and set up HAProxy.

Download HAProxy


To download HAProxy, visit its official page and download either the pre-compiled binaries or the source. Alternatively, you can install it via apt-get (though if you want the latest version, you may need to tinker with sources.list to update your sources).

For this tutorial, we will build and install HAProxy from source (which I believe is faster and simpler).

Open up a terminal and enter the following commands:
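The commands were roughly as follows (the download URL and version reflect HAProxy 1.4.18, and the TARGET value assumes a Linux 2.6 kernel such as Ubuntu 10.04's; adjust both for your system):

```shell
wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.18.tar.gz
tar -xzf haproxy-1.4.18.tar.gz
cd haproxy-1.4.18
make TARGET=linux26     # linux26 targets Linux 2.6 kernels
sudo make install
```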


This should download the latest HAProxy, extract it, and install it. If you get a permission error, prepend sudo to each command. If you have difficulty installing from source, do some Googling on the topic; there are plenty of resources on installing HAProxy from source (some are outdated but may still apply).

After HAProxy has been installed, verify that it's indeed installed! Open up a terminal and type the following command: haproxy.

You should see the following message:

Configure HAProxy


In order for HAProxy to act as a load balancer, we need to create a custom HAProxy configuration file where we will declare our Tomcat servers.

First, I'll present a basic configuration to jump-start our exposure to HAProxy. In Part 3, we'll study this configuration and explain what's happening line by line.

I created a configuration file and saved it at /etc/haproxy/haproxy.cfg:
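The file itself isn't reproduced here, but a minimal configuration consistent with what this series uses might look like the following (the timeouts and maxconn are illustrative; the tomcat1/tomcat2 addresses and the stats settings match our setup):

```
global
    maxconn 4096

defaults
    mode http
    option httplog
    timeout connect  5000
    timeout client  50000
    timeout server  50000

frontend http-in
    bind *:80
    default_backend servers

backend servers
    stats enable
    stats uri /admin?stats
    stats realm Haproxy\ Statistics
    server tomcat1 127.0.0.1:8080 cookie JSESSIONID check inter 5000
    server tomcat2 127.0.0.1:8180 cookie JSESSIONID check inter 5000
```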


Run HAProxy


After configuring HAProxy, let's verify that it's running and communicating properly with our Tomcat instances.

Open up a terminal and run the following command:
sudo haproxy -f /etc/haproxy/haproxy.cfg

Now open up a browser and visit the following link:
http://localhost/admin?stats

Your browser should show the following page:


Based on this page, tomcat1 and tomcat2 are both down. That's because they're not running yet. Let's start both Tomcat instances, and the stats page should update automatically.

To start tomcat1, run the following command:
sudo /usr/local/tomcat-7.0.21-server1/bin/startup.sh

To start tomcat2, run the following command:
sudo /usr/local/tomcat-7.0.21-server2/bin/startup.sh

Here's the result:


Notice both Tomcats are now ready.

Next Section


We've completed setting up our environment for Tomcat clustering and load balancing using HAProxy. In the next section, we will explore various load-balancing scenarios to learn more about Tomcat and HAProxy. Click here to proceed.


Tomcat: Clustering and Load Balancing with HAProxy under Ubuntu 10.04 - Part 3

Review


In the previous section, we implemented load balancing using HAProxy and session sharing among our Tomcat instances. In this section, we will examine the HAProxy configuration file in depth and set up its logging facilities.

Table of Contents


  1. Setting-up the Environment
    • Download Tomcat
    • Configure Tomcat
    • Run Tomcat
    • Download HAProxy
    • Configure HAProxy
  2. Load Balancing
    • Default Setup
    • Sharing Sessions
    • Configure Tomcat to Share Sessions
    • Retest Session Sharing
    • Session Sharing Caveat
  3. HAProxy Configuration
    • Configuration File
    • Logging

HAProxy Configuration


When it comes to HAProxy configuration, the best source of information is its online documentation at http://haproxy.1wt.eu/#docs. It's one massive text file of technical information though.

Configuration File


Not all information in that document applies to our configuration. Therefore, I have copied only the relevant information and pasted it as per-line comments:
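As an illustration of that style, here is the same kind of configuration with the relevant documentation pasted as per-line comments (the directives mirror the ones discussed below; the values are illustrative):

```
frontend http-in             # accepts incoming HTTP connections
    bind *:80                # listen on port 80 on all interfaces
    default_backend servers  # forward requests to the 'servers' backend

backend servers
    stats enable                     # turn on the built-in stats page
    stats uri /admin?stats           # URL of the stats page, relative to the host
    stats realm Haproxy\ Statistics  # server name shown when logging in to stats
    # each 'server' line defines one backend server: name, IP:port, options
    server tomcat1 127.0.0.1:8080 cookie JSESSIONID check inter 5000
    server tomcat2 127.0.0.1:8180 cookie JSESSIONID check inter 5000
```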


Take note of the following parts:
  • frontend http-in: We're telling HAProxy to listen to HTTP requests
  • default_backend servers: We declare a set of backend servers
  • stats uri /admin?stats: This is the URL to the stats page, relative to your hostname
  • stats realm Haproxy\ Statistics: This is the server name you see when you log in to the stats page.
  • server tomcat1 127.0.0.1:8080 cookie JSESSIONID check inter 5000: Defines a server (in this case, a Tomcat server) with its assigned IP address and port number.

HAProxy Logging


Logging is crucial in any serious application, and HAProxy has facilities to log its activities.
However, to setup one requires extra effort because to enable logging in HAProxy we need to know
Linux's logging facilities via the Syslog server and take into account the Syslog implementation in Ubuntu Lucid (10.04).

What is Syslog?
syslog is a utility for tracking and logging all manner of system messages from the merely informational to the extremely critical. Each system message sent to the syslog server has two descriptive labels associated with it that makes the message easier to handle. - Source: Quick HOWTO : Ch05 : Troubleshooting Linux with syslog

To enable logging, we need to:
  • Add a logging facility in haproxy.cfg
  • Add the logging facility to Syslog server

Add a logging facility in haproxy.cfg
Edit the haproxy.cfg file:
sudo gedit /etc/haproxy/haproxy.cfg

And declare the following:
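In sketch form, the additions to the global section look like this:

```
global
    log 127.0.0.1 local0          # send all log output to the syslog server
    log 127.0.0.1 local1 notice   # second facility, filtered to level notice
```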


We declared two logging facilities under the global section. Both facilities send their log output to the syslog server located at 127.0.0.1 (the default port is 514). Each facility has its own unique name: local0 and local1.

Why are they named this way? These are local facilities defined by the user to log specific daemons (see What is LOCAL0 through LOCAL7 ?).

Remember that an optional level can be specified for a facility. Hence, local1 has an extra argument: notice. This means local1 will only capture messages of level notice and above, as opposed to all levels (errors, debug, etc.).

Reload haproxy by running the following command:
sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

This command will not restart HAProxy; it just reloads the configuration file. This is good because you won't kill active connections. If you get a missing-file error (e.g., /var/run/haproxy.pid) or other errors, just kill the haproxy process and restart it:
kill -9 #####
where ##### is the process id


Add the logging facility to Syslog server
There are two solutions to achieve this.

Solution #1
a. Run
sudo gedit /etc/rsyslog.conf

And declare the following lines at the end:
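Based on the facilities declared in haproxy.cfg, the added lines would look roughly like this (the log file names match the ones mentioned later in this section; the UDP input must also be enabled so rsyslog accepts HAProxy's messages):

```
# accept syslog messages over UDP (HAProxy logs to 127.0.0.1:514)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

# write each facility to its own file
local0.* -/var/log/haproxy0a.log
local1.* -/var/log/haproxy1a.log
```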

b. Restart syslog server by running:
restart rsyslog

Solution #2
Instead of editing rsyslog.conf directly, we can declare a separate configuration file under the /etc/rsyslog.d/ directory. If you inspect rsyslog.conf carefully, you will see the following comments:


This setting loads all *.conf files under the /etc/rsyslog.d/ directory.

a. Run
sudo gedit /etc/rsyslog.d/haproxy.conf

And declare the following lines at the end:


b. Restart syslog server by running:
restart rsyslog

Overflowing Logs


We've set up HAProxy logging, and we can see the logs in the /var/log/haproxy0a.log and /var/log/haproxy1a.log files. However, we also see them in /var/log/syslog.

This is bad because redundant logs just eat up space. You don't want syslog to be polluted with HAProxy entries; that's why we set up a separate logging facility in the first place.

There are two ways to prevent this unwanted overflow:

Solution #1
a. Run
sudo gedit /etc/rsyslog.d/50-default.conf

And search for the following lines (right after the introductory comments):

And change them as follows:

This means local0 and local1 should not overflow to syslog.
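On a stock Ubuntu 10.04 install, the change amounts to adding local0.none and local1.none to the catch-all rule (the exact original line may differ slightly on your system):

```
# before:
#*.*;auth,authpriv.none                      -/var/log/syslog
# after:
*.*;auth,authpriv.none;local0.none;local1.none    -/var/log/syslog
```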

b. Restart syslog server by running:
restart rsyslog

Solution #2
a. Run
sudo gedit /etc/rsyslog.conf

And find the following lines:

And change them as follows:

The addition of & ~ will prevent the logs directed to local0 from overflowing to other logging facilities.
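In sketch form, the rule block ends up looking like this; each & ~ line discards the messages already matched by the rule directly above it (file names follow the ones used earlier):

```
local0.* -/var/log/haproxy0a.log
& ~
local1.* -/var/log/haproxy1a.log
& ~
```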

Note: If you can't find those lines, you may have declared your configuration under /etc/rsyslog.d/haproxy.conf. If so, apply the same change there.

b. Restart syslog server by running:
restart rsyslog

Rotate Logs


We've set up HAProxy logging and isolated the logs from overflowing into syslog. However, there's another problem: the HAProxy logs will soon pile up and consume precious disk space. Fortunately, Linux has a way to rotate the log files on a schedule and compress the old ones.

For more info on log rotation in Linux, please see Quick HOWTO : Ch05 : Troubleshooting Linux with syslog: Logrotate.

Again, there are two ways of handling this requirement:

Solution #1
a. Run
sudo gedit /etc/logrotate.d/haproxy

And add the following lines:
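A logrotate stanza along these lines would rotate and compress both HAProxy logs (the rotation frequency and retention count are illustrative):

```
/var/log/haproxy0a.log /var/log/haproxy1a.log {
    daily           # rotate once a day
    rotate 7        # keep seven rotated logs
    compress        # gzip rotated logs
    missingok       # don't complain if a log file is absent
    notifempty      # skip rotation when the log is empty
}
```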

b. Restart syslog server by running:
restart rsyslog

Solution #2
Solution #2 is log rotation with rsyslog itself, from the official rsyslog documentation. This is something I haven't tried yet, but if you're willing to experiment, here's the link: http://www.rsyslog.com/doc/log_rotation_fix_size.html. This technique utilizes output channels.

However, read the following notes:
Output Channels are a new concept first introduced in rsyslog 0.9.0. As of this writing, it is most likely that they will be replaced by something different in the future. So if you use them, be prepared to change your configuration file syntax when you upgrade to a later release.
- http://www.rsyslog.com/doc/rsyslog_conf_output.html

References


The following is a compendium of references for further reading:

R: What is LOCAL0 through LOCAL7 ?
L: http://www.linuxquestions.org/questions/linux-security-4/what-is-local0-through-local7-310637/

R: Quick HOWTO : Ch05 : Troubleshooting Linux with syslog
L: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch05_:_Troubleshooting_Linux_with_syslog

R: rsyslog official site
L: http://www.rsyslog.com/doc/rsyslog_conf.html

R: rsyslog.conf configuration file
L: http://www.rsyslog.com/doc/rsyslog_conf.html

R: UDP Syslog Input Module
L: http://www.rsyslog.com/doc/imudp.html

R: How to keep haproxy log messages out of /var/log/syslog
L: http://serverfault.com/questions/214312/how-to-keep-haproxy-log-messages-out-of-var-log-syslog

R: HAProxy Logging in Ubuntu Lucid
L: http://kevin.vanzonneveld.net/techblog/article/haproxy_logging/

R: Install and configure haproxy, the software based loadbalancer in Ubuntu
L: http://linuxadminzone.com/install-and-configure-haproxy-the-software-based-loadbalancer-in-ubuntu/

Conclusion


That's it. We've completed our study of HAProxy and Tomcat clustering. We've learned how to set up and configure load balancing and handle failover, examined the important points of enabling session sharing, and studied HAProxy's configuration and logging facilities.

If you want to learn more about web development and integration with other technologies, feel free to read my other tutorials in the Tutorials section.


Friday, September 16, 2011

Spring MVC: Integrating MySQL, MongoDB, RabbitMQ, and AJAX - Part 1

Introduction

In this article we will explore how to integrate MySQL, MongoDB, and RabbitMQ in a Spring MVC application using the Spring Data JPA, Spring Data MongoDB, and Spring AMQP projects, respectively. Then we'll add AJAX to the presentation layer using jQuery. For presenting tabular data, we will explore two jQuery plugins: DataTables and jQgrid. The jQgrid version will use Spring Cache support to boost performance. The ultimate purpose of this article is to demonstrate how to integrate these different projects in a single Spring MVC application.

Our application is a simple event management system for performing CRUD operations. When we talk of events, we're referring to the colloquial definition, e.g., a wedding. The application's main data will be stored in a MySQL database, with Hibernate as our ORM. We will use Spring Data JPA to simplify the data access layer, which means we don't need to implement our own data access objects.

Our application is also capable of broadcasting these events as simple text messages. To broadcast these events, we will use RabbitMQ as our messaging broker, and Spring AMQP for sending and receiving of messages.

No application is free from errors, so we've added error persistence capability using MongoDB. Our application already has logging, but we also want the ability to analyze these errors later; this is where MongoDB comes in. To simplify the data access layer, we will use Spring Data MongoDB, similar to Spring Data JPA.

Table of Contents

  1. Event Management
  2. Messaging support
  3. Error persistence
  4. Build and deploy

Frequently Asked Questions (FAQs)

  1. Q: What is JPA?
    A: See http://en.wikipedia.org/wiki/Java_Persistence_API
  2. Q: What is MongoDB?
    A: See http://www.mongodb.org
  3. Q: What is RabbitMQ?
    A: See http://www.rabbitmq.com
  4. Q: What is Spring Data?
    A: See http://www.springsource.org/spring-data
  5. Q: What is Spring Data - JPA?
    A: See http://www.springsource.org/spring-data/jpa
  6. Q: What is Spring Data - MongoDB?
    A: See http://static.springsource.org/spring-data/data-document/docs/current/reference/html
  7. Q: What is Spring AMQP?
    A: See http://www.springsource.org/spring-amqp
  8. Q: What is jQuery?
    A: http://jquery.com/
  9. Q: What is AJAX?
    A: http://en.wikipedia.org/wiki/Ajax_(programming)
  10. Q: What is Ehcache?
    A: http://ehcache.org/

Required Tools and Services

The following tools and services are essential in building and running our project:
  1. MySQL
    The application's main data will be stored in MySQL. If you don't have it yet, download and install it from the MySQL home page, or use a pre-packaged setup via XAMPP or Wamp.
    • MySQL: http://www.mysql.com/downloads/
    • Wamp: http://www.wampserver.com
    • XAMPP: http://www.apachefriends.org
  2. RabbitMQ
    We will publish and listen for messages using RabbitMQ. If you don't have it yet, download and install it from the RabbitMQ home page.
    • http://www.rabbitmq.com/download.html
  3. MongoDB
    We will persist errors using MongoDB. If you don't have it yet, download and install it from the MongoDB home page.
    • http://www.mongodb.org/downloads
  4. Maven
    We will use Maven to build our project. If you don't have it yet, download and install it from the Maven home page. You can also use SpringSource Tool Suite (STS), an Eclipse-powered IDE, which has Maven pre-installed.
    • Maven: http://maven.apache.org/download.html
    • STS: http://www.springsource.com/developer/sts

Screenshots

Before we dive into development, it's better if we first visualize what we will be developing.






Live Deployment in the Cloud

The best way to interact with the application is with a live deployment. Therefore, I have taken extra steps to deploy the project in the cloud via the excellent open platform Cloud Foundry. To access the project, visit http://spring-mysql-mongo-rabbit.cloudfoundry.com/

Project Structure

To see how we're going to structure the project, take a look at the following screenshot:


The project's source code is available at GitHub. You can access the project's repository at spring-mysql-mongo-rabbit-integration

Project Dependencies (pom.xml)

The pom.xml contains the project's dependencies. It's a long list, so I will only show the relevant entries. As of September 17, 2011, we're using the latest versions.

What is POM?
POM stands for "Project Object Model". It is an XML representation of a Maven project held in a file named pom.xml. See http://maven.apache.org/pom.html

To see the pom's entire contents, please check the attached Maven project at the end of this tutorial.

Development

The project will be divided into four parts:
  1. Event Management
  2. Messaging support
  3. Error persistence
  4. Build and deploy

A. Event Management

The Event Management is the core function of the system, and we're specifically referring to the CRUD operations. We will divide this stage into five sub-parts:
  1. Event Management
    • A1. Domain
    • A2. Service
    • A3. Controller
    • A4. View
      1. DataTables view
      2. jqGrid view
    • A5. Crosscutting Concerns

A1. Domain

The domain contains a simple Event class. By marking it with @Entity the class will be persisted in a relational database. Notice the class has validation constraints added. For example, the name field cannot be null.

Special attention is given to the date field. We've annotated it with @DateTimeFormat, a Spring conversion annotation (see Spring Reference 5.6.2.1 Format Annotation API) to help us serialize and deserialize JSON dates in the presentation layer.

Event.java
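The original listing is not shown here, so here's a minimal sketch of what the entity might look like (the package name and exact fields are assumptions; only the annotations described above are taken from the text):

```java
package org.krams.tutorial.domain; // hypothetical package name

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.validation.constraints.NotNull;

import org.springframework.format.annotation.DateTimeFormat;

// Marked @Entity so it is persisted to the relational database
@Entity
public class Event {

    @Id
    @GeneratedValue
    private Long id;

    // Example validation constraint: name must not be null
    @NotNull
    private String name;

    // Helps Spring serialize/deserialize dates in the presentation layer
    @DateTimeFormat(pattern = "yyyy-MM-dd")
    private Date date;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }
}
```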


A2. Service

Before we start developing the service layer, we would normally have to create the data access objects (DAOs) manually first. We do this for every new project, and over time it becomes repetitive and tedious. Thankfully, Spring has a solution: the Spring Data project.

Spring Data's purpose is to provide ready-made data access objects for various data sources. All we need to do is extend an interface and we're good to go. This means we can start writing the service immediately (if needed).

Spring Data is not a single project, but rather it's composed of many different projects specifically catering to different types of databases (see http://www.springsource.org/spring-data). Since we're using MySQL and Hibernate we will use the Spring Data JPA project.

IEventRepository.java
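Since the listing is omitted, a minimal sketch of the repository interface (the interface name follows the filename; the rest is the standard Spring Data JPA pattern):

```java
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA generates the implementation at runtime;
// CRUD methods (save, findAll, findOne, delete, ...) come for free.
public interface IEventRepository extends JpaRepository<Event, Long> {
}
```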


IEventService.java
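A plausible sketch of the service interface (the method names are assumptions based on the CRUD operations described above):

```java
import java.util.List;

// Business-facing contract for Event CRUD operations
public interface IEventService {
    List<Event> getAll();
    Event get(Long id);
    void add(Event event);
    void edit(Event event);
    void delete(Long id);
}
```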


EventService.java
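A hedged sketch of the service implementation, delegating to the Spring Data repository (bean name and method names are assumptions):

```java
import java.util.List;

import javax.annotation.Resource;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service("eventService")
@Transactional
public class EventService implements IEventService {

    // Injected Spring Data JPA repository; no hand-written DAO needed
    @Resource
    private IEventRepository eventRepository;

    public List<Event> getAll() { return eventRepository.findAll(); }
    public Event get(Long id) { return eventRepository.findOne(id); }
    public void add(Event event) { eventRepository.save(event); }
    public void edit(Event event) { eventRepository.save(event); }
    public void delete(Long id) { eventRepository.delete(id); }
}
```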


A3. Controller

We have a single controller, EventController, whose main purpose is to display the Event page. It also has methods for handling CRUD operations that are ultimately delegated to the EventService.

EventController.java
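The original listing is omitted; a minimal sketch of the controller (URL mappings and method names are assumptions, not the project's actual routes):

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/event")
public class EventController {

    @Autowired
    private IEventService eventService;

    // Displays the main Event page
    @RequestMapping
    public String getEventPage() {
        return "event-page";
    }

    // Returns all records as JSON for the AJAX table
    @RequestMapping("/records")
    @ResponseBody
    public List<Event> getRecords() {
        return eventService.getAll();
    }

    // CRUD operations are ultimately delegated to the EventService
    @RequestMapping("/delete")
    @ResponseBody
    public String delete(@RequestParam Long id) {
        eventService.delete(id);
        return "OK";
    }
}
```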


A4. View

I've provided two views for displaying Event records: a DataTables version and a jqGrid version. The purpose of having two views is to demonstrate Spring's flexibility and to show various ways of presenting tabular data.

To proceed, please choose your preferred view (a specific guide will be presented):
  1. DataTables view
  2. jqGrid view

JMeter Test

The main difference between these two views is the addition of a cache to the jqGrid controller. To appreciate the difference, I've provided JMeter tests to gauge each controller's performance. Take note that these tests have nothing to do with the DataTables or jqGrid plugin itself; they are written so that they only query the controllers without generating the presentation layer.

Small dataset (9 records):

Large dataset (4300+ records):

Notice in small datasets the difference in performance is almost trivial. However, in large datasets the difference is phenomenal. Adding a cache greatly improves the application's performance (of course this assumes that the same request will be called multiple times).

Click here to download the JMeter test. When running it, make sure to run only one test at a time, i.e., run the DataTables test first, then the jqGrid test, or vice versa.

What is JMeter?
Apache JMeter is open source software, a 100% pure Java desktop application designed to load test functional behavior and measure performance. Source: http://jakarta.apache.org/jmeter/

A5. Crosscutting Concerns

As mentioned earlier, no application is free from errors. To facilitate troubleshooting, a logging mechanism needs to be added. Typically we would place the logging mechanism across our existing classes. This is a crosscutting concern.

What is a crosscutting concern?
In computer science, cross-cutting concerns are aspects of a program which affect other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and can result in either scattering (code duplication), tangling (significant dependencies between systems), or both.

For instance, if writing an application for handling medical records, the bookkeeping and indexing of such records is a core concern, while logging a history of changes to the record database or user database, or an authentication system, would be cross-cutting concerns since they touch more parts of the program. See http://en.wikipedia.org/wiki/Cross-cutting_concern

To implement a clean way of logging our application, we will take advantage of Spring's CustomizableTraceInterceptor support as seen in the Spring Data JPA reference (Appendix B. Frequently asked questions).

By default, CustomizableTraceInterceptor logs at the trace level. If we want to change the logging level, we need to extend this class and override the writeToLog() method. We have created a subclass, TraceInterceptor, that changes the logging level to debug.

TraceInterceptor.java
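The listing is omitted here; a sketch of the subclass, overriding writeToLog() to log at debug level (the two enabling overrides are a common companion pattern and an assumption on my part):

```java
import org.aopalliance.intercept.MethodInvocation;
import org.apache.commons.logging.Log;
import org.springframework.aop.interceptor.CustomizableTraceInterceptor;

// Logs entry/exit messages at DEBUG instead of the default TRACE level
public class TraceInterceptor extends CustomizableTraceInterceptor {

    @Override
    protected void writeToLog(Log logger, String message, Throwable ex) {
        if (ex != null) {
            logger.debug(message, ex);
        } else {
            logger.debug(message);
        }
    }

    @Override
    protected boolean isInterceptorEnabled(MethodInvocation invocation, Log logger) {
        return true;
    }

    @Override
    protected boolean isLogEnabled(Log logger) {
        return logger.isDebugEnabled();
    }
}
```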


After creating the class, we declare it as a bean in the XML configuration. Notice we've configured the entry and exit signature patterns and the pointcut expressions to match. In other words, this will log all services and specific controllers.

trace-context.xml
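Since the configuration is not shown, here's a hedged fragment of what it might contain. The $[methodName], $[arguments], and $[returnValue] placeholders are real CustomizableTraceInterceptor placeholders; the bean class package and pointcut expression are assumptions:

```xml
<bean id="traceInterceptor" class="org.krams.tutorial.interceptor.TraceInterceptor">
    <property name="enterMessage" value="Entering $[methodName]($[arguments])"/>
    <property name="exitMessage"  value="Leaving $[methodName](): $[returnValue]"/>
</bean>

<aop:config>
    <!-- Match all service methods (a hypothetical pointcut expression) -->
    <aop:advisor advice-ref="traceInterceptor"
                 pointcut="execution(public * org.krams.tutorial.service..*.*(..))"/>
</aop:config>
```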


Next Section

In the next section, we will add messaging support using RabbitMQ and Spring AMQP. To proceed to the next section, click here.

Spring MVC: Integrating MySQL, MongoDB, RabbitMQ, and AJAX - Part 3

Review

In the previous two sections, we've managed to create the core Event management system and integrated RabbitMQ messaging using Spring AMQP. For performing CRUD operations, we've utilized the Spring Data JPA project. In this section, we will integrate MongoDB and add error persistence.

Where am I?

Table of Contents

  1. Event Management
  2. Messaging support
  3. Error persistence
  4. Build and deploy

Error Persistence

It might seem odd that we record errors in the application. Isn't the default logging feature enough? That depends. For the purpose of this tutorial, we are doing it to explore MongoDB integration. There are also advantages to this approach: we can easily query and analyze our application's errors and provide a statistical view.

It's interesting that logging is the first use case listed in the MongoDB site and statistical analysis as the last well-suited use case (see MongoDB Use Cases).

We'll divide this page into five sections:
  1. C. Error Persistence
    • C1. Domain
    • C2. Service
    • C3. Aspect
    • C4. Controller
    • C5. View

C1. Domain

The domain contains a simple ErrorLog class. At the class head we've added @Document, a Spring Data MongoDB annotation that marks the class for persistence as a MongoDB document, and @Id to mark the field id as the class's identifier. If you scrutinize the application's domain classes, notice that the @Id annotation in Event.java (earlier) is from the javax.persistence package, whereas the @Id in ErrorLog.java is from the org.springframework.data.annotation package.

ErrorLog.java
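The listing is omitted; a minimal sketch of the document class (the fields are assumptions; note that package names for the MongoDB annotations changed across early Spring Data releases):

```java
import java.util.Date;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Persisted as a document in a MongoDB collection
@Document
public class ErrorLog {

    @Id
    private String id;   // MongoDB ObjectId, mapped as a String

    private String message;
    private Date date;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }
}
```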


C2. Service

Normally, before we start developing the service layer, we would have to create the data access objects (DAOs) manually first. This becomes tedious as you keep repeating the same implementation. Thankfully, Spring rescues us from this repetitive task with the Spring Data project.

Spring Data's main purpose is to provide ready-made data access objects for various data sources. This means all we need to do is write our service to contain the business logic. And since we're using MongoDB, we will use the corresponding Spring Data project: Spring Data MongoDB. (To see a list of other Spring Data projects, visit http://www.springsource.org/spring-data).

IErrorLogRepository.java
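A sketch of the omitted repository interface, mirroring the JPA one (the identifier type String is an assumption matching the Mongo ObjectId):

```java
import org.springframework.data.mongodb.repository.MongoRepository;

// As with the JPA repository, Spring Data generates the implementation;
// no hand-written DAO or service wrapper is needed for simple retrieval.
public interface IErrorLogRepository extends MongoRepository<ErrorLog, String> {
}
```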


That's it. We're done with the service. Wait, where's the service? We don't create one since we don't have custom business logic that needs to be wrapped in a service. All we're doing is retrieval of records.

C3. Aspect

Error persistence, at its core, is no different from logging. It's a crosscutting concern. And similar to how we implemented AMQP messaging, we will use an Aspect to add error persistence.

We have declared EventMongoAspect with two pointcut expressions: one for the service and another for the controller. The interceptService() method handles and persists service errors, while the interceptController() method handles controller errors. We don't persist controller errors; instead we provide a default value so that when returning JSP pages we're not propagating stack traces to the browser.

EventMongoAspect.java
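Since the listing is omitted, here's a hedged sketch of the two advice methods described above (pointcut expressions, package names, and the controller's default return value are all assumptions):

```java
import java.util.Date;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class EventMongoAspect {

    @Autowired
    private IErrorLogRepository errorLogRepository;

    // Persist service-layer exceptions to MongoDB, then rethrow
    @Around("execution(public * org.krams.tutorial.service..*.*(..))")
    public Object interceptService(ProceedingJoinPoint pjp) throws Throwable {
        try {
            return pjp.proceed();
        } catch (Exception e) {
            ErrorLog log = new ErrorLog();
            log.setMessage(e.toString());
            log.setDate(new Date());
            errorLogRepository.save(log);
            throw e;
        }
    }

    // Don't persist controller errors; return a safe default instead
    // so stack traces are never propagated to the browser
    @Around("execution(public * org.krams.tutorial.controller..*.*(..))")
    public Object interceptController(ProceedingJoinPoint pjp) throws Throwable {
        try {
            return pjp.proceed();
        } catch (Exception e) {
            return "error-page"; // hypothetical default view name
        }
    }
}
```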


C4. Controller

The error controller is a simple controller with the single purpose of serving the error page.

ErrorController.java
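A minimal sketch of the omitted controller (the mapping and view name are assumptions):

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

// Single purpose: serve the error page
@Controller
@RequestMapping("/error")
public class ErrorController {

    @RequestMapping
    public String getErrorPage() {
        return "error-page";
    }
}
```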


C5. View

Our view is a single JSP page containing a table and an AJAX request for retrieving records. The setup is similar to event-page.jsp (see A4. View), except we don't have dialogs for adding, editing, or deleting records.

The getRecords() function is the AJAX request responsible for retrieving records and filling in the table. It then converts the table to a DataTables-enhanced table. The addition of DataTables is mainly for aesthetic purposes (plus some nifty features like sorting and searching).

error-page.jsp


Result

You can preview the final output by visiting a live deployment at http://spring-mysql-mongo-rabbit.cloudfoundry.com/error


Next Section

In the next section, we will build and deploy our project using Tomcat and Jetty. To proceed, click here.

Spring MVC: Integrating MySQL, MongoDB, RabbitMQ, and AJAX - Part 2

Review

In the previous section, we created the core Event management system and utilized the Spring Data JPA project to simplify data access. We've also added AJAX functionality to make the application responsive. In this section, we will integrate RabbitMQ and messaging in general.

Where am I?

Table of Contents

  1. Event Management
  2. Messaging support
  3. Error persistence
  4. Build and deploy

Messaging Support

Our application is capable of broadcasting events as simple text messages. These messages are CRUD actions and exceptions that arise during the lifetime of the application. This means that if a user performs an add, edit, delete, or get, the action will be published to a message broker. If there are any exceptions, we will publish them as well.

Why do we need to publish these events? First, it helps us study and explore RabbitMQ messaging. Second, we can have a third-party application whose sole purpose is to handle these messages for intensive statistical analysis. Third, it allows us to monitor in real-time and check the status of our application instantaneously.

If you're unfamiliar with messaging and its benefits, I suggest visiting the following links:

We'll divide this page into four sections:
  1. B. Messaging Support
    • B1. Aspect
    • B2. Configuration
    • B3. Controller
    • B4. View

B1. Aspect

We will publish CRUD operations as simple messages. It's like logging, except we send the logs to a message broker. Usually we would add this feature across existing classes. Unfortunately, this leads to "scattering" and "tangling" of code. No doubt, it's a crosscutting concern. We had the same scenario when we added the logging feature earlier (see the A5. Crosscutting Concerns section).

We have created an Aspect, EventRabbitAspect.java, to address this concern. To send messages, we use the AmqpTemplate, passing the exchange, routing key, and message. Since we want to publish messages as methods are called, and also errors as they happen, we use @Around to capture these events. The following code is a variation of the original tutorial Chatting in the Cloud: Part 1 by Mark Fisher (http://blog.springsource.com/2011/08/16/chatting-in-the-cloud-part-1/)

EventRabbitAspect.java
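The listing is omitted, so here's a hedged sketch of the aspect (the pointcut, routing-key suffixes, and message text are assumptions; the exchange name and routing-key prefixes come from the configuration section below):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class EventRabbitAspect {

    @Autowired
    private AmqpTemplate amqpTemplate;

    @Around("execution(public * org.krams.tutorial.service..*.*(..))")
    public Object interceptEvent(ProceedingJoinPoint pjp) throws Throwable {
        try {
            Object result = pjp.proceed();
            // Publish normal CRUD actions: exchange, routing key, message
            amqpTemplate.convertAndSend("eventExchange",
                    "event.general." + pjp.getSignature().getName(),
                    "Called " + pjp.getSignature().toShortString());
            return result;
        } catch (Exception e) {
            // Publish errors on the error routing key, then rethrow
            amqpTemplate.convertAndSend("eventExchange",
                    "event.error." + pjp.getSignature().getName(),
                    "Error: " + e.getMessage());
            throw e;
        }
    }
}
```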


In order for Spring to recognize this aspect, make sure to declare the following element in your XML configuration (we've done this in the trace-context.xml):
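The element in question is most likely the standard AspectJ auto-proxy declaration (assuming the aop namespace is already declared in the configuration file):

```xml
<aop:aspectj-autoproxy/>
```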



B2. Configuration

After declaring the Aspect, we now declare and configure the RabbitMQ and Spring AMQP specific settings. In fact, you will soon discover that most of the work with Spring AMQP is configuration-related: declaration of queues, bindings, exchanges, and connection settings.

We've declared two queues: eventQueue for normal events, and errorQueue for errors. We have a single exchange, eventExchange, where we declare the bindings for the two queues.

In order to send messages to the eventQueue, we set the routing key to event.general.*. Likewise, to send messages to the errorQueue, we set the routing key to event.error.*. For a tutorial on these concepts, please visit the official examples at http://www.rabbitmq.com/getstarted.html

spring-rabbit.xml
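Since the file is not shown, here's a hedged fragment using the Spring AMQP rabbit namespace, reflecting the queues, exchange, and routing keys described above (connection settings are assumptions):

```xml
<rabbit:connection-factory id="connectionFactory" host="localhost"/>

<rabbit:template id="amqpTemplate" connection-factory="connectionFactory"/>

<rabbit:admin connection-factory="connectionFactory"/>

<rabbit:queue name="eventQueue"/>
<rabbit:queue name="errorQueue"/>

<!-- Bind each queue to the exchange via its routing-key pattern -->
<rabbit:topic-exchange name="eventExchange">
    <rabbit:bindings>
        <rabbit:binding queue="eventQueue" pattern="event.general.*"/>
        <rabbit:binding queue="errorQueue" pattern="event.error.*"/>
    </rabbit:bindings>
</rabbit:topic-exchange>
```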


B3. Controller

After configuring RabbitMQ and Spring AMQP, we declare a controller, aptly named MonitorController. Its main purpose is to handle monitoring requests. This controller is based on the Chatting in the Cloud: Part 1 blog by Mark Fisher (http://blog.springsource.com/2011/08/16/chatting-in-the-cloud-part-1/).

MonitorController.java
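A hedged sketch of the monitoring controller (the mappings, the drain-the-queue approach, and the JSON response are assumptions based on the AJAX polling described in B4):

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/monitor")
public class MonitorController {

    @Autowired
    private AmqpTemplate amqpTemplate;

    @RequestMapping("/event")
    public String getEventMonitorPage() {
        return "event-monitor-page";
    }

    // Drains pending messages from the queue and returns them as JSON,
    // so the page's AJAX poll can display them
    @RequestMapping("/event/messages")
    @ResponseBody
    public List<String> getEventMessages() {
        List<String> messages = new ArrayList<String>();
        Object message;
        while ((message = amqpTemplate.receiveAndConvert("eventQueue")) != null) {
            messages.add(message.toString());
        }
        return messages;
    }
}
```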


B4. View

We have two JSP pages event-monitor-page.jsp and error-monitor-page.jsp for event and error views respectively. We've utilized AJAX to pull information from the application. Again, this is based on Chatting in the Cloud: Part 1 blog by Mark Fisher (http://blog.springsource.com/2011/08/16/chatting-in-the-cloud-part-1/).

event-monitor-page.jsp


error-monitor-page.jsp


Result

You can preview the final output by visiting our live app at http://spring-mysql-mongo-rabbit.cloudfoundry.com/monitor/event


Next Section

In the next section, we will integrate MongoDB and add error persistence to the current application. To proceed to the next section, click here.

Spring MVC: Integrating MySQL, MongoDB, RabbitMQ, and AJAX - Part 1: DataTables View

Review

In the previous section, we created the core Event management system. In this section, we will work on the view layer. Data will be presented in a table. We will use DataTables, a jQuery plugin, and a custom JavaScript function to provide advanced features like sorting and searching.

Where am I?

Table of Contents

  1. Event Management
  2. Messaging support
  3. Error persistence
  4. Build and deploy

DataTables

What is DataTables?
DataTables is a plug-in for the jQuery Javascript library. It is a highly flexible tool, based upon the foundations of progressive enhancement, which will add advanced interaction controls to any HTML table. (Source: http://datatables.net/)

Before we proceed with the development, let's preview the final output:

Development

Since we've already discussed the controller and service classes in the previous section, we just need to discuss the JSP pages. We actually have five JSP pages:
  1. Primary page
  2. Supporting pages
    • add-dialog.jsp
    • edit-dialog.jsp
    • delete-dialog.jsp
    • generic-dialog.jsp

1. Primary page

The primary page contains the main view. It's a single page containing the following sections:
  1. URLs
  2. Imports
  3. Menu
  4. Table
  5. Dialogs
  6. Conversion of links to buttons
  7. Attaching link functions
  8. Retrieval of records and conversion to DataTables

event-page.jsp


Let's discuss each section:

a. URLs
The following section declares two URLs using the JSTL core tag: the root and resources URLs, respectively. These URLs are declared here for reusability:



b. Imports
The following section imports a number of CSS and JavaScript resources which includes the core jQuery library, the DateJS library (a Date utility, http://www.datejs.com/), DataTables (a jQuery plugin, http://datatables.net/), and a custom jQuery plugin for retrieving records via AJAX and inserting records to a table automatically:



c. Menu
The following section declares a menu list:



d. Table
The following section creates an empty table and three HTML links (controller links) for adding, editing, and deleting of records. Take note of the table id, eventTable, because this id will be referenced multiple times:



e. Dialogs
The following section includes four external JSPs. The first three JSPs contain form elements for adding, editing, and deleting of records respectively. The fourth dialog is used for displaying generic messages.



f. Conversion of links to buttons
The following section converts the previous three links into buttons. This is mainly for aesthetic purposes:



g. Attaching link functions
The following section attaches a function to our controller buttons. Each function will trigger a dialog box. These dialog boxes are the four dialog JSPs we included earlier:



h. Retrieval of records and conversion to DataTables
The following section calls a custom getRecords() function which will retrieve Event records, populate our table with data, and convert the table to DataTables:



If you want to disable DataTables, just change the previous function to the following:



Results

When we run the application, the result should be similar to the following image (this is an actual screenshot taken from a live deployment):


Limitations

Our DataTables page has some limitations. However, these are not DataTables limitations but rather something that we intentionally (and unintentionally) did not implement:
  • The status "Showing 1 to 4 Entries" has a bug when adding or deleting a record. You need to refresh the whole page to show the correct status
  • Date conversion is in integer format instead of Date
  • UI validation is not implemented

Playground

Some developers might not have time to build the entire project; maybe they just want something to play around with quickly. Because of that, I've deployed live samples to Cloud Foundry and added sample fiddles via JSFiddle.

JSFiddle

If you want to explore more about DataTables, I've provided fiddles for you to play around. These fiddles do not need any server-side programs to run. Feel free to fork them.

Here are the fiddles:
  • Plain table with static data: http://jsfiddle.net/krams/Us9S5/
  • Plain table with dynamic data: http://jsfiddle.net/krams/jD67t/
  • Add form: http://jsfiddle.net/krams/8QKAe/
  • Edit form: http://jsfiddle.net/krams/Kf4MF/
  • Full table with DataTables and buttons: http://jsfiddle.net/krams/9Syqc/

What is JSFiddle?
JsFiddle is a playground for web developers, a tool which may be used in many ways. One can use it as an online editor for snippets build from HTML, CSS and JavaScript. The code can then be shared with others, embedded on a blog, etc. Using this approach, JavaScript developers can very easily isolate bugs. We aim to support all actively developed frameworks - it helps with testing compatibility - Source: http://doc.jsfiddle.net/

Cloud Foundry

If you want to tinker with a live deployment, I suggest you visit the application's live site at http://spring-mysql-mongo-rabbit.cloudfoundry.com/event

What is Cloud Foundry?
Cloud Foundry is the open platform as a service project initiated by VMware. It can support multiple frameworks, multiple cloud providers, and multiple application services all on a cloud scale platform.

Next Section

In the next section, we will explore jqGrid, a jQuery plugin, for displaying tabular data. Read next.