
Red Hat JBoss Data Grid 6.3 is now available!

Red Hat's JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high performance, highly available and to scale linearly.

JBoss Data Grid is accessible to both Java and non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk, and easily accessible using the REST, Memcached, and Hot Rod protocols, or directly in-process through a traditional Java Map API.

The key features of JBoss Data Grid are:

  • Schema-less key/value store for storing unstructured data
  • Querying to easily search and find objects
  • Security to store and restrict access to your sensitive data
  • Multiple access protocols with data compatibility for applications written in any language, using any framework
  • Transactions for data consistency
  • Distributed execution and map/reduce API to perform large scale, in-memory computations in parallel across the cluster
  • Cross-datacenter replication for high availability, load balancing and data partitioning

What's new in 6.3 ?

  • Expanded security for your data
    • User authentication via Simple Authentication and Security Layer (SASL)
    • Role based authorization and access control to Cache Manager and Caches
    • New nodes required to authenticate before joining a cluster
    • Encrypted communication within the cluster
  • Deploy into Apache Karaf and WebLogic
    • Use as an embedded or distributed cache in Red Hat JBoss Fuse integration flows
  • Enhanced map/reduce
    • Improved scalability by storing computation results directly in the grid instead of pushing them back to the application
    • Takes advantage of the hardware's parallel processing power for greater computing efficiency
  • New JPA cache store that preserves data schema
  • Improved remote query and C# Hot Rod client in technology preview
  • JBoss Data Grid modules for JBoss Enterprise Application Platform (JBoss EAP)

The complete list of new and updated features is described here.

How can this be installed on JBoss EAP ?

JBoss Data Grid has 2 deployment modes:

  • Library mode (embedded distributed caches)
  • Client-Server mode (remote distributed cache) - Install the Hot Rod client JARs in EAP, and have applications reference these JARs to use the Hot Rod protocol to connect to the JBoss Data Grid Server (remote cache).

Why a new C# client ?

The remote Hot Rod client is aware of the cluster topology and hashing scheme on the server and can get to a (k,v) entry in a single hop. In contrast, REST and memcached usually require an extra hop to get to an entry. As a result, the Hot Rod protocol has higher performance and is the preferred protocol (in Client-Server mode). JBoss Data Grid 6.1 only had a Java Hot Rod client - for all other languages, customers had to use memcached or REST. JBoss Data Grid 6.2 added a C++ Hot Rod client. And now JBoss Data Grid 6.3 adds a Tech Preview of a C# client.

Infinispan has a lot more Hot Rod clients.
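
The announcement does not include client code, but for illustration, here is a minimal sketch of Client-Server access from Java using the Infinispan Hot Rod client API; the host name, port, and cache contents are assumptions, not part of the announcement:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientSketch {
    public static void main(String[] args) {
        // Hypothetical server address; 11222 is the usual Hot Rod port
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("datagrid.example.com").port(11222);

        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        try {
            // The default cache is used like a Map
            RemoteCache<String, String> cache = cacheManager.getCache();
            cache.put("greeting", "Hello from Hot Rod");
            System.out.println(cache.get("greeting"));
        } finally {
            cacheManager.stop();
        }
    }
}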

How would somebody use JBoss Data Grid with JBoss Fuse ?

The primary purpose is caching in integration workflows.

For example, a remote JBoss Data Grid can be used with Fuse to cache search results.
REST can be used to communicate with a remote cache, and starting with JBoss Data Grid 6.3, Hot Rod can now be used as well.

Fuse currently has a camel-cache component which is based on EHCache. A new camel-infinispan component has also been released in the community.

JBoss Data Grid 6.3 can be used with the community version of camel-infinispan.

Why would somebody use JBoss Data Grid on WebLogic ?

Customers who run the WebLogic stack and eventually want to migrate to the JBoss stack can start the migration by replacing Oracle Coherence with JBoss Data Grid.

The complete documentation is available here.



Data-driven unit testing in Java

Data-driven testing is a powerful way of testing a given scenario with different combinations of values. In this article, we look at several ways to do data-driven unit testing in JUnit.

Suppose, for example, you are implementing a Frequent Flyer application that awards status levels (Bronze, Silver, Gold, Platinum) based on the number of status points you earn. The number of points needed for each level is shown here:

level  | minimum status points | result level
Bronze | 0                     | Bronze
Bronze | 300                   | Silver
Bronze | 700                   | Gold
Bronze | 1500                  | Platinum

Our unit tests need to check that we can correctly calculate the status level achieved when a frequent flyer earns a certain number of points. This is a classic problem where data-driven tests would provide an elegant, efficient solution.

Data-driven testing is well supported in modern JVM unit testing libraries such as Spock and specs2. However, some teams don't have the option of using a language other than Java, or are limited to using JUnit. In this article, we look at a few options for data-driven testing in plain old JUnit.

Parameterized Tests in JUnit

JUnit provides some support for data-driven tests, via the Parameterized test runner. A simple data-driven test in JUnit using this approach might look like this:

@RunWith(Parameterized.class)
public class WhenEarningStatus {

    @Parameters(name = "{index}: {0} initially had {1} points, earns {2} points, should become {3} ")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {Bronze, 0,    100,  Bronze},
                {Bronze, 0,    300,  Silver},
                {Bronze, 100,  200,  Silver},
                {Bronze, 0,    700,  Gold},
                {Bronze, 0,    1500, Platinum},
        });
    }

    private Status initialStatus;
    private int initialPoints;
    private int earnedPoints;
    private Status finalStatus;

    public WhenEarningStatus(Status initialStatus, int initialPoints, int earnedPoints, Status finalStatus) {
        this.initialStatus = initialStatus;
        this.initialPoints = initialPoints;
        this.earnedPoints = earnedPoints;
        this.finalStatus = finalStatus;
    }

    @Test
    public void shouldUpgradeStatusBasedOnPointsEarned() {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}

You provide the test data in the form of a list of Object arrays, identified by the @Parameters annotation. These object arrays contain the rows of test data that you use for your data-driven test. Each row is used to instantiate the member variables of the class, via the constructor.

When you run the test, JUnit will instantiate and run a test for each row of data. You can use the name attribute of the @Parameters annotation to provide a more meaningful title for each test.

There are a few limitations to the JUnit parameterized tests. The most important is that, since the test data is defined at a class level and not at a test level, you can only have one set of test data per test class. Not to mention that the code is somewhat cluttered - you need to define member variables, a constructor, and so forth.

Fortunately, there is a better option.

Using JUnitParams

A more elegant way to do data-driven testing in JUnit is to use JUnitParams (https://code.google.com/p/junitparams/). JUnitParams (see Maven Central, http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22JUnitParams%22, to find the latest version) is an open source library that makes data-driven testing in JUnit easier and more explicit.

A simple data-driven test using JUnitParams looks like this:

@RunWith(JUnitParamsRunner.class)
public class WhenEarningStatusWithJUnitParams {

    @Test
    @Parameters({
            "Bronze, 0,   100,  Bronze",
            "Bronze, 0,   300,  Silver",
            "Bronze, 100, 200,  Silver",
            "Bronze, 0,   700,  Gold",
            "Bronze, 0,   1500, Platinum"

    })
    public void shouldUpgradeStatusBasedOnPointsEarned(Status initialStatus, int initialPoints,
                                                       int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}

Test data is defined in the @Parameters annotation, which is associated with the test itself rather than the class, and passed to the test via method parameters. This makes it possible to have different sets of test data for different tests in the same class, or to mix data-driven tests with normal tests in the same class, which is a much more logical way of organizing your classes.

JUnitParams also lets you get test data from other methods, as illustrated here:

    @Test
    @Parameters(method = "sampleData")
    public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                    int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                .named("Joe", "Jones")
                .withStatusPoints(initialPoints)
                .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }

    private Object[] sampleData() {
        return $(
                $(Bronze, 0,   100, Bronze),
                $(Bronze, 0,   300, Silver),
                $(Bronze, 100, 200, Silver)
        );
    }

The $ method provides a convenient short-hand to convert test data to the Object arrays that need to be returned.

You can also externalize the test data into a separate class, as illustrated here:

    @Test
    @Parameters(source=StatusTestData.class)
    public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                    int earnedPoints, Status finalStatus) {
        ...
    }

The test data here comes from a method in the StatusTestData class:

    public class StatusTestData {
        public static Object[] provideEarnedPointsTable() {
            return $(
                    $(Bronze, 0,   100, Bronze),
                    $(Bronze, 0,   300, Silver),
                    $(Bronze, 100, 200, Silver)
            );
        }
    }

This method needs to be static, return an object array, and start with the word "provide".

Getting test data from external methods or classes in this way opens the way to retrieving test data from external sources such as CSV or Excel files.
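
For example, JUnitParams also offers a @FileParameters annotation that reads each line of a file as one set of parameters. Here is a minimal sketch, assuming a CSV file at src/test/resources/status-levels.csv containing lines such as "Bronze, 0, 300, Silver" (the file name and its contents are hypothetical), and reusing the same FrequentFlyer and Status sample classes as above:

import junitparams.FileParameters;
import junitparams.JUnitParamsRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JUnitParamsRunner.class)
public class WhenEarningStatusFromCsv {

    @Test
    @FileParameters("src/test/resources/status-levels.csv")
    public void shouldUpgradeStatusBasedOnPointsEarned(Status initialStatus, int initialPoints,
                                                       int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}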

JUnitParams provides a simple and clean way to implement data-driven tests in JUnit, without the overhead and limitations of the traditional JUnit parameterized tests.

Testing with non-Java languages

If you are not constrained to Java and/or JUnit, more modern tools such as Spock (https://code.google.com/p/spock/) and specs2 provide great ways of writing clean, expressive unit tests in Groovy and Scala respectively. In Groovy, for example, you could write a test like the following:

class WhenEarningStatus extends Specification {

    def "should earn status based on the number of points earned"() {
        given:
        def member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                .named("Joe", "Jones")
                .withStatusPoints(initialPoints)
                .withStatus(initialStatus);

        when:
        member.earns(earnedPoints).statusPoints()

        then:
        member.status == finalStatus

        where:
        initialStatus | initialPoints | earnedPoints | finalStatus
        Bronze        | 0             | 100          | Bronze
        Bronze        | 0             | 300          | Silver
        Bronze        | 100           | 200          | Silver
        Silver        | 0             | 700          | Gold
        Gold          | 0             | 1500         | Platinum
    }
}

John Ferguson Smart is a specialist in BDD, automated testing, and software life cycle development optimization, and author of BDD in Action and other books. John runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.



Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 6 (Tech Tip #40)

This is the sixth part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, how templates for the reader, processor, and writer were created, and how the specified parameters were validated.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

Part 5 fixed a bug reported by a community member and started work to make processor validation optional.

This part shows:

  • Upgrade from Forge 2.6.0 to 2.7.1
  • Fix the failing test
  • Reader, processor, and writer files are now templates instead of source files
  • Reader, processor, and writer are injected appropriately into the test's temporary project

Enjoy!

As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.



And towards JSF 2.3 we go!

For all JSF folks out there, some important news. What is it? Well, Ed Burns announced Oracle's intent to file JSF 2.3, with me as co-spec lead. See the email to the EG at https://java.net/projects/javaserverfaces-spec-public/lists/users/archiv...

Enjoy!



Shape the future of JBoss EAP and WildFly Web Console

Are you using WildFly ?

Any version of JBoss EAP ?

Would you like to help us define how the Web Console for future versions should look ?

[Screenshot: WildFly 8.1 admin console]

Help the Red Hat UX Design team shape the future of JBoss EAP and WildFly!

We are currently working to improve the usability and information architecture of the web-based admin console. By taking part in a short exercise you will help us better understand how users interpret the information and accomplish their goals.

You do not need to be an expert of the console to participate in this study. The activity shouldn't take longer than 10 to 15 minutes to complete.

To start participating in the study, click on the link below and follow the instructions.

http://ows.io/tj/12t0qr48

I completed the study in about 12 mins and was happy that my clicking around helped shape the future of JBoss EAP and WildFly!

Just take a quick detour from your routine for 10-15 mins and take the study.

Thank you in advance for taking the time to complete the study.



Getting Started with Docker (Tech Tip #39)

If indicators such as the number of articles, meetups, talk submissions at different conferences, and tweets are taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it did, but apparently not. It does, however, solve one problem really well!

Let's hear it from @solomonstre, creator of the Docker project!

In short, Docker simplifies software delivery by making it easy to build and share images that contain your application's entire environment, or application operating system.

What is meant by application operating system ?

Your application typically requires a specific version of the operating system, application server, JDK, and database server; it may require tuned configuration files and, similarly, multiple other dependencies. The application may need to bind to specific ports and needs a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system.

You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers, which run on the container virtualization platform provided by Docker.

What are the main components of Docker ?

Docker has two main components:

  • Docker: the open source container virtualization platform
  • Docker Hub: SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, constraining resources, snapshotting and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details on this.

Images are the "build component" of Docker and a read-only template of the application operating system. Containers are the runtime representation of images and are created from them; they are the "run component" of Docker. Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the "distribution component" of Docker.

Docker in turn contains two components:

  • Daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
  • Client is a Docker binary that accepts commands from the user and communicates back and forth with the Daemon.

How do these work together ?

The Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host.

[Diagram: Docker architecture]

The Client can then start a container using the run command. The complete list of client commands can be seen here.

The Client communicates with the Daemon using sockets or the REST API.

Because Docker uses Linux Kernel features, does that mean I can use it only on Linux-based machines ?

Docker daemon and client for different operating systems can be installed from docs.docker.com/installation/. As you can see, it can be installed on a wide variety of platforms, including Mac and Windows.

For non-Linux machines, a lightweight virtual machine needs to be installed, and the Daemon is installed within it. A native client is then installed on the machine and communicates with the Daemon. Here is the log from booting the Docker daemon on Mac:

~> bash
~> unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
~> mkdir -p ~/.boot2docker
~> if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
~> /usr/local/bin/boot2docker init
2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists
~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
2014/07/16 09:57:13 Waiting for VM to be started...
.......
2014/07/16 09:57:35 Started.
2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set:
2014/07/16 09:57:35     export DOCKER_HOST=tcp://192.168.59.103:2375
~> docker version
Client version: 1.1.1
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): bd609d2
Server version: 1.1.1
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): bd609d2

For example, Docker Daemon and Client can be installed on Mac following the instructions at docs.docker.com/installation/mac.

The VM can be stopped from the CLI as:
boot2docker stop
And then restarted again as:
boot2docker boot
And logged in as:
boot2docker ssh
The complete list of boot2docker commands is available in the help:

~> boot2docker help
Usage: boot2docker [<options>] <command> [<args>]

boot2docker management utility.

Commands:
    init                    Create a new boot2docker VM.
    up|start|boot           Start VM from any states.
    ssh [ssh-command]       Login to VM via SSH.
    save|suspend            Suspend VM and save state to disk.
    down|stop|halt          Gracefully shutdown the VM.
    restart                 Gracefully reboot the VM.
    poweroff                Forcefully power off the VM (might corrupt disk image).
    reset                   Forcefully power cycle the VM (might corrupt disk image).
    delete|destroy          Delete boot2docker VM and its disk image.
    config|cfg              Show selected profile file settings.
    info                    Display detailed information of VM.
    ip                      Display the IP address of the VM's Host-only network.
    status                  Display current state of VM.
    download                Download boot2docker ISO image.
    version                 Display version information.

Enough talk, show me an example ?

Some of the JBoss projects are available as Docker images at www.jboss.org/docker and can be installed following the commands explained on that page. For example, WildFly Docker image can be installed as:

~> docker pull jboss/wildfly
Pulling repository jboss/wildfly
2f170f17c904: Download complete
511136ea3c5a: Download complete
c69cab00d6ef: Download complete
88b42ffd1f7c: Download complete
fdbe853b54e1: Download complete
bc93200c3ba0: Download complete
0daf76299550: Download complete
3a7e1274035d: Download complete
e6e970a0db40: Download complete
1e34f7a18753: Download complete
b18f179f7be7: Download complete
e8833789f581: Download complete
159f5580610a: Download complete
3111b437076c: Download complete

The image can be verified using the command:
~> docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jboss/wildfly       latest              2f170f17c904        8 hours ago         1.048 GB

Once the image is downloaded, the container can be started as:
docker run jboss/wildfly
By default, Docker containers do not provide an interactive shell and input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes it interactive and the -t option allocates a pseudo-TTY.

In addition, we'd also like to make the port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080 where 80 is the host port and 8080 is the container port.

So we'll run the container as:

docker run -i -t -p 80:8080 jboss/wildfly
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/wildfly

  JAVA: java

  JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true

=========================================================================

22:08:29,943 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
22:08:30,200 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
22:08:30,297 INFO  [org.jboss.as] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting
22:08:31,935 INFO  [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
22:08:31,961 INFO  [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final
22:08:31,974 INFO  [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final
22:08:32,057 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
22:08:32,108 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
22:08:32,110 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem
22:08:32,133 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem
22:08:32,178 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main]
22:08:32,206 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
22:08:32,348 INFO  [org.jboss.as.security] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1
22:08:32,397 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
22:08:32,442 INFO  [org.jboss.as.connector.logging] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final)
22:08:32,512 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,512 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,570 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
22:08:32,660 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2
22:08:32,736 INFO  [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final
22:08:32,836 INFO  [org.jboss.as.naming] (MSC service thread 1-15) JBAS011802: Starting Naming Service
22:08:32,839 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-15) JBAS015400: Bound mail session [java:jboss/mail/Default]
22:08:33,406 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content
22:08:33,540 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server.
22:08:33,603 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting
22:08:34,072 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on /0.0.0.0:8080
22:08:34,599 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments
22:08:34,619 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-9) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
22:08:34,781 INFO  [org.jboss.ws.common.management] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final
22:08:34,843 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://0.0.0.0:9990/management
22:08:34,844
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://0.0.0.0:9990
22:08:34,845
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand)

The container's IP address (the boot2docker VM's host-only address) can be found as:
~> boot2docker ip

The VM's Host only interface IP address is: 192.168.59.103

The started container can be verified using the command:
~> docker ps
CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                NAMES
b2f8001164b0        jboss/wildfly:latest   /opt/wildfly/bin/sta   46 minutes ago      Up 12 minutes       8080/tcp, 9990/tcp   sharp_pare

And now the WildFly server can be accessed on your local machine at http://192.168.59.103.

Finally the container can be stopped by hitting Ctrl + C, or giving the command as:

~> docker stop b2f8001164b0
b2f8001164b0

The container id obtained from "docker ps" is passed to the command here.

More detailed instructions to use this image, such as booting in domain mode, deploying applications, etc. can be found at github.com/jboss/dockerfiles/blob/master/wildfly/README.md.

What else would you like to see in the WildFly Docker image ? File an issue at github.com/jboss/dockerfiles/issues.

Other images are also available at jboss.org/docker.

Did you know that Red Hat is among the top contributors to Docker, with 5 Red Hatters from Project Atomic working on it ?



Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 5 (Tech Tip #38)

This is the fifth part (part 1, part 2, part 3, part 4) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, how templates for the reader, processor, and writer were created, and how the specified parameters were validated.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

This part shows:

  • Fix a bug reported by a community member
  • Start work on another issue to make processor validation optional

Enjoy!

As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.



From framework to platform

When I started my career as a Java developer close to 10 years ago, the industry was going through a revolutionary change. The Spring framework, released in 2003, was quickly gaining ground and becoming a serious challenger to the bulky J2EE platform. Having gone through that transition, I quickly found myself in favour of the Spring framework over the J2EE platform, even though declaring beans in the earlier versions of Spring was very tedious.

What happened next was the revamping of the J2EE standard, which was later renamed to JEE. Still, what dominated this era was the use of open source frameworks over the platform proposed by Sun. This practice gives developers full control over the technologies they use, but inflates the deployment size. Slowly, as cloud applications became the norm, I observed the trend of moving infrastructure services from framework to platform again. However, this time, it is not motivated by cloud applications.

Framework vs Platform


I had never heard of, or had to use, any framework in school. However, after joining the industry, I found it tough to build scalable and configurable software without the help of a framework.

From my understanding, any application consists of code that implements business logic and other code that provides helpers, utilities, or infrastructure setup. The code that is not related to business logic, being used repetitively in many projects, can be generalised and extracted for reuse. The output of this extraction process is a framework.

To put it more briefly, a framework is any code that is not related to business logic but helps to address common concerns in applications and is fit for reuse.

Following this definition, MVC, Dependency Injection, Caching, JDBC Template, and ORM are all considered frameworks.

A platform is similar to a framework in that it also helps to address common concerns in applications, but in contrast to a framework, the service is provided outside the application. Therefore, a common service endpoint can serve multiple applications at the same time. The services provided by a JEE application server or by Amazon Web Services are examples of platforms.

Comparing the two approaches, a platform is more scalable and easier to use than a framework, but it also offers less control. Because of these advantages, the platform seems to be the better approach when building cloud applications.

When should we use a platform over a framework

Moving toward platforms does not mean that developers will get rid of frameworks. Rather, platforms complement frameworks in building applications. However, on some special occasions we have a choice between using a platform or a framework to achieve the final goal. In my personal opinion, a platform is preferable to a framework when the following conditions are met:

  • The framework is tedious to use and maintain
  • The service has some common information to be shared among instances
  • The service can utilize additional hardware to improve performance

In the office, we still use the Spring framework, Play framework, or RoR in our applications, and this will not change any time soon. However, to move into the Cloud era, we migrated some of our existing products from internal hosting to Amazon EC2 servers. In order to make the best use of the Amazon infrastructure and improve software quality, we have done some major refactoring of our current software architecture.

Here are some of the platforms with which we are integrating our products:
Amazon Simple Storage Service (Amazon S3) & Amazon CloudFront

We found that Amazon CloudFront is pretty useful for boosting the average response time of our applications. Previously, we hosted most of the applications in our internal server farms, located in the UK and US. This led to a noticeable increase in response time for customers on other continents. Fortunately, Amazon has a much larger infrastructure, with server farms built all around the world. That helps to guarantee a consistent delivery time, regardless of the customer's location.

Currently, due to the manual effort of setting up new instances for applications, we feel that the best use of Amazon CloudFront is for static content, which we host separately from the application in Amazon S3. This practice gives us a double benefit in performance: more consistent delivery times offered by the CDN, plus a separate connection count in the browser for the static content.

Amazon ElastiCache

Caching has never been easy in a clustered environment. The word "cluster" means that your object will not be stored in and retrieved from local memory. Rather, it is sent and retrieved over the network. This task was quite tricky in the past because developers needed to sync the records from one node to another. Unfortunately, not every caching framework supports this feature automatically. Our best framework for distributed caching was Terracotta.

Now, we have turned to Amazon ElastiCache because it is cheap, reliable, and saves us the huge effort of setting up and maintaining a distributed cache. It is worth highlighting that distributed caching is never meant to replace the local cache. The difference in performance suggests that we should only prefer distributed caching over local caching when users need to access real-time, temporary data.
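
The post does not show code here, but as an illustration, the sketch below talks to a Memcached-compatible endpoint, such as the one ElastiCache exposes, using the spymemcached Java client; the endpoint address, key, and expiry are assumptions:

import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class ElastiCacheSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ElastiCache (Memcached) endpoint
        MemcachedClient cache = new MemcachedClient(
                new InetSocketAddress("my-cluster.xxxxxx.cfg.use1.cache.amazonaws.com", 11211));

        // Store a value with a 5-minute expiry and wait for the write to complete
        cache.set("session:42", 300, "real-time temporary data").get();

        // Read the value back over the network
        System.out.println(cache.get("session:42"));

        cache.shutdown();
    }
}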

Event Logging for Data Analytics

In the past, we used Google Analytics for analysing user behaviour, but later decided to build an internal data warehouse. One of the motivations was the ability to track events from both browsers and servers. The event tracking system uses MongoDB as the database, as it allows us to quickly store huge numbers of events.

To simplify the creation and retrieval of events, we chose JSON as the format for events. We cannot simply send an event from the browser directly to the event tracking server because browsers block cross-domain requests. For this reason, Google Analytics sends events to the server in the form of a GET request for a static resource. As we have full control over how the application is built, we chose to let the events be sent back to the application server first and routed to the event tracking server later. This approach is much more convenient and powerful.

Knowledge Portal

In the past, applications accessed data from a database or an internal file repository. However, to be able to scale better, we gathered all our knowledge to build a knowledge portal. We also built a query language to retrieve knowledge from this portal. This approach adds one additional layer to the knowledge retrieval process, but fortunately for us, our system does not need to serve real-time data. Therefore, we can utilize caching to improve performance.

Conclusion

Above is some of our experience of transforming software architecture when moving to the Cloud. Please share your experience and opinions with us.



BDD Requirements Management with JBehave, Thucydides and JIRA - Part 2

Thucydides is an open source library designed to make practicing Behaviour Driven Development easier. Thucydides plays nicely with BDD tools such as JBehave, or even more traditional tools like JUnit, to make writing automated acceptance tests easier, and to provide richer and more useful living documentation. In this series of articles, we look at the tight one- and two-way integration that Thucydides offers with JIRA. The first article discussed basic one-way integration with JIRA. In this article, we will look at taking that integration further. We will see how to insert links to the Thucydides reports into JIRA, how to update the state of JIRA issues based on the Thucydides test outcomes, and how to report on JIRA versions and releases in the Thucydides reports.

The rest of this article assumes you have some familiarity with Thucydides. For a tutorial introduction to Thucydides, check out the Thucydides Documentation or this article for a quick introduction.

The simplest form of two-way integration between Thucydides and JIRA is to get Thucydides to insert a comment containing links to the Thucydides test reports for each related issue card. To get this to work, you need to tell Thucydides where the reports live. One way to do this is to add a property called thucydides.public.url to your thucydides.properties file with the address of the Thucydides reports.

thucydides.public.url=http://buildserver.myorg.com/latest/thucydides/report

This will tell Thucydides that you not only want links from the Thucydides reports to JIRA, but you also want to include links in the JIRA cards back to the corresponding Thucydides reports. When this property is defined, Thucydides will add a comment like the following to any issues associated with the executed tests:

[Screenshot: Thucydides report links added as a comment on a JIRA issue]

The thucydides.public.url will typically point to a local web server where you deploy your reports, or to a path within your CI server. For example, you could publish the Thucydides reports on Jenkins using the Jenkins HTML Publisher Plugin, and then add a line like the following to your thucydides.properties file:

thucydides.public.url=http://jenkins.myorg.com/job/myproject-acceptance-tests/Thucydides_Report/

If you do not want Thucydides to update the JIRA issues for a particular run (e.g. when running your tests locally), you can also set thucydides.skip.jira.updates to true, e.g.

thucydides.skip.jira.updates=true

This will simply write the relevant issue numbers to the log rather than trying to connect to JIRA.

Updating JIRA issue states

You can also configure the plugin to update the status of JIRA issues. This is deactivated by default: to use this option, you need to set the thucydides.jira.workflow.active option to true, e.g.

thucydides.jira.workflow.active=true

The default configuration will work with the default JIRA workflow: open or in progress issues associated with successful tests will be resolved, and closed or resolved issues associated with failing tests will be reopened. If you are using a customized workflow, or want to modify the way the transitions work, you can write your own workflow configuration. Workflow configuration uses a simple Groovy DSL. The following is an example of the configuration file used for the default workflow:

    when 'Open', {
        'success' should: 'Resolve Issue'
    }

    when 'Reopened', {
        'success' should: 'Resolve Issue'
    }

    when 'Resolved', {
        'failure' should: 'Reopen Issue'
    }

    when 'In Progress', {
        'success' should: ['Stop Progress','Resolve Issue']
    }

    when 'Closed', {
        'failure' should: 'Reopen Issue'
    }

You can write your own configuration file and place it on the classpath of your test project (e.g. in the resources directory). Then you can override the default configuration by using the thucydides.jira.workflow property, e.g.

thucydides.jira.workflow=my-workflow.groovy

Alternatively, you can simply create a file called jira-workflow.groovy and place it somewhere on your classpath (e.g. in the src/test/resources directory). Thucydides will then use this workflow. In both these cases, you don’t need to explicitly set the thucydides.jira.workflow.active property.

Release management

In JIRA, you can organize your project releases into versions, as illustrated here:

[Screenshot: JIRA project versions]

You can assign cards to one or more versions using the Fix Version/s field:

[Screenshot: the Fix Version/s field on a JIRA card]

By default, Thucydides will read version details from the Releases in JIRA. Test outcomes will be associated with a particular version using the "Fixed versions" field. The Releases tab gives you a run-down of the different planned versions, and how well they have been tested so far:

[Screenshot: the Thucydides Releases tab]

JIRA uses a flat version structure - you can't have, for example, releases that are made up of a number of sprints. Thucydides lets you organize these in a hierarchical structure based on a simple naming convention. By default, Thucydides uses "release" as the highest level release, and either "iteration" or "sprint" as the second level. For example, suppose you have the following list of versions in JIRA:

  • Release 1
    • Iteration 1.1
    • Iteration 1.2
  • Release 2
  • Release 3

This will produce release reports for Release 1, Release 2, and Release 3, with Iteration 1.1 and Iteration 1.2 appearing underneath Release 1. The reports will contain the list of requirements and test outcomes associated with each release. You can drill down into any of the releases to see details about that particular release.

[Screenshot: drilling down into a release report]

You can also customize the names of the release types using the thucydides.release.types property, e.g.

thucydides.release.types=milestone, release, version

Conclusion

Thucydides has powerful one- and two-way integration with JIRA. In these articles, we have seen how you can incorporate links from Thucydides to JIRA and from JIRA to Thucydides, and even update the status of issues in JIRA based on the test results. And, if you are managing your versions in JIRA, you can also report on how well each version has been tested, and what remains to be tested before the next release.

Want to learn more? Be sure to check out the Thucydides web site, the Thucydides Blog, or join the Thucydides Google Users Group to join the discussion with other Thucydides users.

Wakaleo Consulting, the company behind Thucydides, also runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.



How JDK 8 standardizes and augments Guava library functionalities

JDK 8 introduced a lot of new features and improvements in the platform, from Lambda expressions, Stream collection types, Functional interfaces, and Type annotations to Nashorn, etc.

The Guava library from Google provided some support for functional programming idioms prior to JDK 8. I have been using Guava for some of my projects. So here is a small write-up on how the new functionality added in JDK 8 provides a standardized way to achieve functionality offered by Google's Guava. This article further highlights similarities and differences between the two APIs, and was inspired by this discussion on Google Groups.

The following table shows some of the APIs which I will cover in detail with respect to Guava and JDK 8:

Functionality                    | Guava                           | JDK 8
Predicate                        | apply(T input)                  | test(T input)
Combining predicates             | Predicates.and/or/not           | Predicate.and/or/negate
Supplier                         | Supplier.get                    | Supplier.get
Joiner/StringJoiner              | Joiner.join()                   | StringJoiner.add()
SettableFuture/CompletableFuture | SettableFuture.set(T input)     | CompletableFuture.complete(T input)
Optional                         | Optional.of/fromNullable/absent | Optional.of/ofNullable/empty

Source Code

The following code snippets are part of a complete sample available at https://github.com/bhakti-mehta/samples/tree/master/jdk8-and-guava. For the sake of simplicity, I have a simple sample which works with a collection of people's data. We start with a simple POJO, Person, shown below, used for both the JDK 8 and Guava cases:

public class Person {

    private String firstName;

    private String lastName;

    private int age;

    private Optional<String> suffix;
...
...

As shown in the above snippet, the Person class has fields such as firstName, lastName, age, and an Optional suffix, as well as getters and setters for these.

1.0 Predicates

A Predicate is a boolean-valued function of an argument. Now we will define a Predicate in Guava and in JDK 8 and show how to get the list of people whose age is over 30.

The following snippet shows how to use a Predicate which has an apply(Person input) method that takes a Person object as input and validates whether the age of the person is above 30.

1.1 Predicate with Guava

Here is the code showing how to use com.google.common.base.Predicate

        final List<Person> persons = Person.createList();

        final Predicate<Person> ageOver30 = new Predicate<Person>() {
            public boolean apply(Person input) {
                return input.getAge() > 30;
            }
        };

        Collection<Person> filteredPersons = Collections2.filter(persons,
                ageOver30);

The above snippet returns a Collection of the elements that satisfy the ageOver30 predicate by using the Collections2.filter() method, which takes a Predicate as an argument.

1.2 Predicate with JDK8

Here is a snippet of how to achieve the same behaviour using java.util.function.Predicate. The Predicate has a test method that checks the ageOver30 condition:

        final List<Person> persons = Person.createList();

        final Predicate<Person> ageOver30 = new Predicate<Person>() {
            public boolean test(Person person) {
                return person.getAge() > 30;
            }
        };

        Stream<Person> filteredPersons = persons.stream().filter(
                ageOver30);

The above snippet turns the List<Person> into a Stream<Person> with the stream() method on the Collection interface. The filter() method takes the ageOver30 Predicate and returns a stream that satisfies the criteria.
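
Since java.util.function.Predicate is a functional interface, JDK 8 also lets you write the same predicate far more concisely as a lambda expression. A minimal sketch, reusing the sample's Person.createList() helper:

        final List<Person> persons = Person.createList();

        // Lambda form of the ageOver30 predicate
        final Predicate<Person> ageOver30 = person -> person.getAge() > 30;

        long over30Count = persons.stream().filter(ageOver30).count();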

2.0 Combining Predicates

Predicates can be combined with other predicates. For example, in our sample we need to find the list of people whose age is over 30 and whose last name begins with "W". We can achieve this functionality by creating two Predicates:

  • ageOver30
  • nameBeginsWith

Next we combine the two predicates by calling the and method with them.

2.1 Combining Predicates with Guava

Here is a code snippet with the Guava Predicate class which defines the two predicates ageOver30 and nameBeginsWith:

        final List<Person> persons = Person.createList();

        final Predicate<Person> ageOver30 = new Predicate<Person>() {
            public boolean apply(Person input) {
                return input.getAge() > 30;
            }
        };

        final Predicate<Person> nameBeginsWith = new Predicate<Person>() {
            public boolean apply(Person person) {
                return person.getLastName().startsWith("W");
            }
        };

        Collection<Person> filteredPersons = Collections2.filter(persons,
                Predicates.and(ageOver30, nameBeginsWith));

The above snippet returns a filtered list from the Collections2.filter() method by passing the combined predicate Predicates.and(ageOver30, nameBeginsWith).

2.2 Combining Predicates with JDK 8

Here is the same functionality using java.util.function.Predicate.and/or/negate

    public Stream<Person> getMultiplePredicates() {
        final List<Person> persons = Person.createList();

        final Predicate<Person> ageOver30 = new Predicate<Person>() {
            public boolean test(Person person) {
                return person.getAge() > 30;
            }
        };

        final Predicate<Person> nameBeginsWith = new Predicate<Person>() {
            public boolean test(Person person) {
                return person.getLastName().startsWith("W");
            }
        };

        Stream<Person> filteredPersons = persons.stream().filter(
                ageOver30.and(nameBeginsWith));
        return filteredPersons;
    }

The above snippet returns a stream by filtering with the combined ageOver30.and(nameBeginsWith) predicate.

3.0 Supplier

Supplier is a functional interface that encapsulates an operation and allows lazy evaluation of the operation. It supplies objects of a particular type.

3.1 Supplier in Guava

Here is a snippet of how to use com.google.common.base.Supplier in Guava

    public int getSupplier() {
        Supplier<Person> person = new Supplier<Person>() {
            public Person get() {
                return new Person("James", "Sculley", 53, Optional.of("Sr"));
            }
        };

        return person.get().getAge();
    }

As seen in the above snippet we create a new Supplier and the get() method returns a new instance of Person.

3.2 Supplier in JDK 8

The following code shows how to create a java.util.function.Supplier with Lambda expressions in JDK 8.

    public int getSupplier() {
        final List<Person> persons = Person.createList();
        Supplier<Person> anotherone = () -> {
            Person psn = new Person("James", "Sculley", 53, Optional.of("Sr"));
            return psn;
        };

        return anotherone.get().getAge();
    }

As shown in above snippet, similar to the Guava case, we create a new Supplier and the get() method returns a new instance of Person.

4.0 Joiner/StringJoiner

A Joiner in Guava / a StringJoiner in JDK 8 joins pieces of text together, separated by a delimiter.

4.1 Joiner in Guava

Here is an example of a Joiner in Guava which joins various strings delimited by ';':

    public String getJoiner() {
        Joiner joiner = Joiner.on("; ");
        return joiner.join("Violet", "Indigo", "Blue", "Green", "Yellow", "Orange", "Red");
    }

4.2 StringJoiner in JDK 8

The following snippet shows the equivalent functionality in JDK 8:

    public String getJoiner() {
        StringJoiner joiner = new StringJoiner("; ");
        return joiner.add("Violet").add( "Indigo").add( "Blue").add( "Green")
        .add("Yellow").add( "Orange").add( "Red").toString();
    }

5.0 java.util.Optional

java.util.Optional is a way for programmers to indicate that a value may be absent: there may have been a value initially that is now set to null, or no value was ever found.

5.1 Optional in Guava

Here is a sample of com.google.common.base.Optional

  • Optional.of(T): Make an Optional containing the given non-null value, or fail fast on null.
  • Optional.absent(): Return an absent Optional of some type.
  • Optional.fromNullable(T): Turn the given possibly-null reference into an Optional, treating non-null as present and null as absent.

Here is the code which declares the suffix of a Person as Optional

Optional<String> suffix = Optional.of("Sr");

5.2 Optional in JDK8

  • Optional.of(T): Returns an Optional with the specified present non-null value.
  • Optional.ofNullable(T): Returns an Optional describing the specified value if non-null, otherwise returns an empty Optional.
  • Optional.empty(): Returns an empty Optional instance. No value is present for this Optional.

Here is the code which declares the suffix of a Person as Optional

Optional<String> suffix = Optional.of("Sr");
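
The original snippets stop there, but for illustration, here is a minimal sketch of consuming a JDK 8 Optional; the fallback value is an assumption:

        Optional<String> suffix = Optional.of("Sr");

        // Fall back to a default when no suffix is present
        String display = suffix.orElse("(none)");
        System.out.println(display);

        // Run a callback only when a value is present
        suffix.ifPresent(s -> System.out.println("Suffix: " + s));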

6.0 SettableFuture in Guava/ CompletableFuture in JDK 8

These extend Future and provide an asynchronous, event-driven programming model, in contrast to the blocking nature of Future in Java.

SettableFuture is similar to CompletableFuture in JDK 8; both can help to create a Future object for an event or a task that will occur. Code calling future.get() blocks until the value is available. When the asynchronous task finishes execution, it calls future.set(), and all the code blocking on future.get() then gets the result.

6.1 SettableFuture in Guava

Here is a simple case demonstrating this functionality in Guava and JDK8

    public SettableFuture<String> getSettableFuture() {
        final SettableFuture<String> future = SettableFuture.create();
        return future;
    }

    public void handleFutureTask(SettableFuture<String> sf) throws InterruptedException {
        Thread.sleep(5000);
        sf.set("Test");
    }

In the above snippet we create a new SettableFuture in default state using SettableFuture.create(). The set() method sets the value for this future object.

6.2 CompletableFuture in JDK8

The following code shows how the equivalent functionality is achieved with CompletableFuture in JDK 8.

    public CompletableFuture<String> getCompletableFuture() {
        final CompletableFuture<String> future = new CompletableFuture<>();
        return future;
    }

    public void handleFutureTask(CompletableFuture<String> cf) throws InterruptedException {
        Thread.sleep(5000);
        cf.complete("Test");
    }

As shown in the above snippet we create a CompletableFuture and invoke the complete method to set the value for this future object.
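
For illustration, a caller could also consume the CompletableFuture without blocking by registering a callback; a minimal sketch (the printed message is an assumption):

        CompletableFuture<String> future = getCompletableFuture();

        // Runs once handleFutureTask() calls future.complete("Test")
        future.thenAccept(value -> System.out.println("Received: " + value));

        // Alternatively, future.get() blocks until the value is available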

The above samples showed how JDK 8 standardizes in the platform, and augments, some of the functionality that the Guava library aimed to provide with JDK 7. JDK 8 has been a great leap in terms of the newer capabilities it provides. Guava will definitely provide additional improvements on top of the standardized API.



