Back from the JCrete Unconference

Some time ago, I got an invitation from Heinz Kabutz (the man behind the Java Specialists newsletter, to which you should subscribe right away if you haven't already) to join the JCrete conference.

My wife took a dim view of this. “You mean, there is no program? You'll just stand around and drink beer and chat?” I tried to explain to her that it's no different for me when I go to Java One, where I learn more from the hallway conversations than from the sessions. The solution was to take the entire family along.

It turned out to be great. To see what JCrete is all about, read the blogs by Geertjan Wielenga and Fabian Lange, who explain the “unconference” approach very nicely. The clusters of folks in the water at the beach of Falassarna are conference attendees who, being geeks, mostly talk tech. I learned a bunch about sun.misc.Unsafe while swimming.

I co-led a discussion on what one would like to see in java.util.stream in the future. Compared with the Scala API, for example, streams are missing quite a few useful operations. There is no zip (it was removed). There is no convenient way of turning iterators or iterables into streams (also removed).

There are good reasons for these omissions. Zipping is best done with pairs, and pairs are best done with value types. A stream method on iterables might work better when we have specialization of generics. It makes sense to wait until these features are ready, perhaps in Java 10. (Nothing much is going to happen with streams in JDK 9. I only found one new stream-related method: Optional.stream turns an Optional into a stream of length 0 or 1.)
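
For the curious, here is a minimal sketch of how that one addition composes with flatMap (assuming a Java 9 build; the data is made up for illustration):

import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

List<Optional<String>> results = List.of(
    Optional.of("x"), Optional.empty(), Optional.of("y"));
// Optional.stream yields a stream of length 0 or 1, so flatMap
// drops the empty optionals and keeps the present values.
List<String> present = results.stream()
    .flatMap(Optional::stream)
    .collect(Collectors.toList());   // ["x", "y"]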

What can you do in the meantime? We found a number of libraries that provide stream-like abstractions with richer APIs: LazySeq, ProtonPack, jOOλ, JavaSlang.
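
If you would rather not pull in a library just for the missing iterable-to-stream conversion, the usual workaround goes through StreamSupport; a sketch, not library code:

import java.util.stream.Stream;
import java.util.stream.StreamSupport;

// Every Iterable can hand out a Spliterator, which StreamSupport
// wraps in a Stream. Pass true instead of false for a parallel stream.
static <T> Stream<T> toStream(Iterable<T> iterable) {
    return StreamSupport.stream(iterable.spliterator(), false);
}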

Personally, I like to use streams for the “what, not how” API, not so much for parallel streams. Sure, parallel streams are impressive, but they are also quite specialized. They work well when you want to keep all cores busy, with data from an in-memory data structure that is efficiently splittable, and stream operations that are computationally intensive. I don't have many situations like that in practice, which explains why my teaching examples always seem to involve factoring BigInteger values.

Some of the attendees reported from their consulting jobs that they saw eager stream users add .parallel() everywhere. Clearly, that's a terrible idea. If the data comes from a file or database, it won't be split efficiently by the fork-join framework. If the stream operations block, the fork-join pool can starve. And in an app server, does one really want to go full bore on all cores? (It is possible to constrain the pool, but not obvious.)
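
The non-obvious way to constrain the pool, for what it's worth, relies on an implementation detail: parallel stream work started from inside a fork-join worker stays in that worker's pool rather than the common pool. A sketch (the pool size and data are made up):

import java.util.concurrent.ForkJoinPool;
import java.util.stream.LongStream;

ForkJoinPool pool = new ForkJoinPool(4);  // at most 4 worker threads
long sum = pool.submit(() ->
    LongStream.rangeClosed(1, 1_000_000).parallel().sum()
).join();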

Other than telling people to be careful about parallel streams, what did I learn? I'll have to figure out how to build Project Valhalla so that I can learn all about value types and generic specialization. I won't have to learn about sun.misc.Unsafe. DukeScript looks cool, and I'll have to check it out.

Most importantly, I learned about JCrete. The unconference format is great, the location is unbeatable, and the attendees were amazing. Check out some videos on Stephen Chin's blog.



Developing professional Java application with NetBeans IDE at university

Last Saturday, 11th July 2015, I was at École Supérieure d'Informatique de Bobo-Dioulasso for a training session on how to build a professional Java application with NetBeans IDE.
Over five hours, I gave an overview of the editor and developed with the attendees a small Java web application mixing PrimeFaces, JPA, and MySQL as the database.
The experience was a success: the students were very happy to discover the power of NetBeans and all the possibilities it offers them. They found it very easy to use for building a sophisticated application, and as fast as ever. The session was also an opportunity to show that NetBeans is not only a Java editor but a tool for building several kinds of projects (HTML5, PHP, Groovy, C/C++, etc.). One student was so enthusiastic that he decided to move to NetBeans IDE for his Zend Framework based project.

We ♥ NetBeans :)

This is a screenshot of the frontend of our application!



Managing Concurrency in SIP Servlet 2.0
There are some key differences in programming SIP applications compared to the usual Java EE applications, which are mostly based on HTTP. Often, SIP Servlets are used to create a service that involves multiple SIP Sessions and SIP Application Sessions. The SIP protocol is also much more asynchronous than HTTP, so SIP Servlet POJOs need to handle many kinds of requests and responses. Both of these can lead to scenarios that cause concurrency issues. Let's look at one such scenario which can cause a deadlock.

Consider a scenario in which two threads in an application each try to access a SIP Application Session locked by the other thread. Obviously, whether this deadlocks also depends on how exactly the container handles concurrency. In any case, it is a clear obstacle to writing portable applications.
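
A schematic sketch of that shape, with plain Java locks standing in for the session locks a container might hold on the application's behalf (illustration only, not container code):

// sasA and sasB stand for two SIP Application Sessions.
final Object sasA = new Object();
final Object sasB = new Object();

// Thread 1 handles a request that touches session A, then session B.
new Thread(() -> {
  synchronized (sasA) {
    synchronized (sasB) { /* ... */ }
  }
}).start();

// Thread 2 touches the sessions in the opposite order.
new Thread(() -> {
  synchronized (sasB) {
    synchronized (sasA) { /* ... */ }  // deadlock: each thread waits for the other
  }
}).start();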

SIP Servlet 2.0 introduces two capabilities to solve this issue.

1) A standard concurrency control mode for SIP Servlet Applications.

package com.example;

import javax.servlet.sip.annotation.SipApplication;

@SipApplication(name = "Foo",
    concurrencyMode = ConcurrencyMode.APPLICATIONSESSION)

If you have an annotation like this in your SIP Servlet application, the container performs concurrency control at the level of the application session. It ensures that tasks accessing the same application session are not executed at the same time.

2) A set of concurrency utilities for SIP

Making use of JSR 236 (Concurrency Utilities for Java™ EE), SIP Servlet 2.0 defines a few concurrency utilities for SIP Servlets. They help a SIP Servlet application run a task concurrently with a specified SIP Application Session as the context. Note that the general behavior of these utilities is defined by JSR 236; SIP Servlet 2.0 defines their behavior for SIP Servlet applications. The following default managed objects are specified.

  • ManagedExecutorService : java:comp/ManagedSipExecutorService
  • ManagedScheduledExecutorService : java:comp/ManagedScheduledSipExecutorService
  • ContextService : java:comp/SipContextService
  • ManagedThreadFactory : java:comp/ManagedSipThreadFactory

Apart from these, SIP Servlet 2.0 also defines three execution properties that allow an application to specify a SIP Application Session to use as the context while running tasks. See below for the properties.

  • javax.servlet.sip.ApplicationSessionKey : Specifies the SIP application key.
  • javax.servlet.sip.ApplicationSessionId : Specifies the application session ID.
  • javax.servlet.sip.ApplicationSession.create : Indicates that the container creates a new SipApplicationSession and uses it as the context.
2.1) Submitting a task with the SIP Application Session of the SIP Servlet POJO as the context

To achieve this, an application may follow the steps below.

  1. Inject a java:comp/ManagedSipExecutorService using the @Resource annotation.
  2. Create a Callable or Runnable that contains the business logic.
  3. Submit the Callable or Runnable from the SIP Servlet POJO.

Here is example code:

@SipServlet
public class ExamplePOJO1 {
  @Resource(lookup = "java:comp/ManagedSipExecutorService")
  ManagedExecutorService mes;

  @Invite
  protected void onInvite(SipServletRequest req) {
    // Create a task instance. MySipTask implements Callable...
    MySipTask sipTask = new MySipTask();
    // Submit the task to the ManagedExecutorService; it runs with
    // this POJO's SIP Application Session as the context.
    Future<?> sipFuture = mes.submit(sipTask);
  }
}
2.2) Submitting a task with a specific SIP Application Session as the context

To achieve this, an application may follow the steps below.

  1. Inject a java:comp/ManagedSipExecutorService using the @Resource annotation.
  2. Create a Callable or Runnable that contains the business logic.
  3. Create a contextual proxy of the Callable or Runnable using the java:comp/SipContextService utility.
  4. Specify the SIP Application Session (either its ID or key) as one of the execution properties.
  5. Submit the contextual proxy object.

Here is example code showing how this may be done.

  @Resource(lookup = "java:comp/SipContextService")
  ContextService sipCS;

  @Resource(lookup = "java:comp/ManagedSipExecutorService")
  ManagedExecutorService sipMES;

  @Inject
  SipSessionsUtil ssu;

  public void doAsync(final String sasId, final String appState) {
    // Execution property: run the task with the given application
    // session as the context.
    Map<String, String> props = new HashMap<>();
    props.put(SipServlet.SIP_APPLICATIONSESSION_ID, sasId);

    final SipSessionsUtil util = ssu;

    Runnable task = (Runnable & Serializable) () -> {
      final SipApplicationSession session =
        util.getCurrentApplicationSession();
      Object value = session.getAttribute("counter");
      int counter = (value == null) ? 0 : (int) value;
      session.setAttribute("counter", ++counter);
      session.setAttribute("appState", appState);
    };

    Runnable proxyTask = (Runnable) sipCS.createContextualProxy(
      task, props, Runnable.class, Serializable.class);
    sipMES.submit(proxyTask);
  }

With these capabilities, an application can handle the scenario described above without concurrency issues.

This is just a snapshot of the functionality provided. A number of advanced capabilities are possible; some of the interesting ones are given below.

  1. Use java:comp/ManagedScheduledExecutorService for scheduling tasks instead of TimerService (see the sketch below).
  2. Run the contextual proxy with the specified SIP Application Session as the context directly on a thread (e.g., an MDB thread) without submitting the task.
  3. Use a ManagedTask instead of a contextual proxy.
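
For the first item, here is a minimal sketch of what such scheduling might look like (MyCleanupTask is a hypothetical Runnable, and the delay is made up):

@Resource(lookup = "java:comp/ManagedScheduledExecutorService")
ManagedScheduledExecutorService sipSES;

public void scheduleCleanup() {
  // Schedule a one-shot task, much as one might have used a
  // ServletTimer from TimerService before.
  sipSES.schedule(new MyCleanupTask(), 30, TimeUnit.SECONDS);
}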

Sorry for the long article. I hope it helps you write SIP Servlet applications with better concurrency control and portability.



And we have moved!

And we have moved to https://www.manorrock.com/blog/



Webinar Notes: Typesafe William Hill Omnia Patrick Di Loreto


My friend Oliver White is doing his usual bang-up job in his new gig at Typesafe. One aspect is the humble webinar. Here are my notes for one that caught my eye, Using Spark, Kafka, Cassandra and Akka on Mesos for Real-Time Personalization. This is a very dense but well-delivered presentation by Patrick Di Loreto, who helped develop a new platform for his employer, the online gambling service William Hill.

Morally, I am sensitive to the real damage done to real lives and families that is caused by gambling, so I will include a link to an organization that offers help: 1-800-GAMBLER. That said, this is just another instance of the ancient tradition of technology development being driven by something that traditionally is seen as vice. (For a humorous, NSFW and prophetic Onion article, search Google for “theonion internet andreessen viewing device”. I’m old enough to have first read that in an actual physical newspaper!)

Now, on to the raw notes. YMMV of course, but if nothing else this will help you overcome the annoying problem of the slides not being synched to the audio.

Context: presentation by Patrick Di Loreto (@patricknoir), R&D
engineering lead for William Hill online betting.  The presentation
was done for Typesafe as a webinar on 14 June 2015.

They have a new betting platform they call Omnia.

- Need to handle massive amounts of data

- Based on Lambda Architecture from Nathan Marz
  <http://lambda-architecture.net/>.

- Omnia is a platform that includes four different components

  * Chronos - Data Source

  * Fates - Batch Layer

  * NeoCortex - Speed layer

  * Hermes - Serving layer

03:47 definition of Lambda Architecture

  “All the data must come from a unique place (data source).”

  They separate access to the data source into two different modes based
  on timeliness requirements.

  NeoCortex (Speed Layer) is to access the data in real time, but
  without some consistency and correctness guarantees.  Optimized for
  speed.  It has only recent data.

  Fates (Batch Layer) is to access the data not in real time, but with
  more (complete?) consistency.

05:00 Reactive Manifesto slide

06:15 importance of elasticity for them

06:47 Chronos Data Source: “It’s nothing else than a container for
active streams”

  “Chronos is a sort of middleware.  It can talk to the outside world
  and bring the data into their system.”  Organizes the data into a
  stream of observable events, called “incidents”.  Can have different
  viewpoints for different concerns:

  * Internal (stuff they need to implement the system itself)

  * Product centric (which of the WH products, such as “sports”,
    “tweets”, “news”)

  * External (“social media sharing”)

  * Customer centric

10:12 Chronos streams connect to the external system and bring it into
Chronos

  Adapter: Understand all the possible protocols that other systems
  implement.  Connect to the other system.

  Converter: Transform the incoming data into their internal format

  Persistence Manager: Make the converted data durable.

11:22 Chronos clustering

  Benefits from the Akka Framework.

  Distributes the streams across the cluster.

  When failover happens, stream connection to outside source is migrated
  to another node via Akka.  Keeps referential transparency.  Each
  stream is an Actor which “supervises” its children: adapter,
  converter, persistence manager. 

12:41 Supervising (Slides diverged from audio) (Slide 12)

  Supervision is key to allowing “error kernel
  pattern”. <http://danielwestheide.com/blog/2013/03/20/the-neophytes-guide-to-scala-part-15-dealing-with-failure-in-actor-systems.html>

    Basically, it is just a simple guideline you should always try to
    follow, stating that if an actor carries important internal state,
    then it should delegate dangerous tasks to child actors, so as to
    prevent the state-carrying actor from crashing. Sometimes, it may
    make sense to spawn a new child actor for each such task, but that’s
    not a necessity.

    The essence of the pattern is to keep important state as far at the
    top of the actor hierarchy as possible, while pushing error-prone
    tasks as far to the bottom of the hierarchy as possible.

  Embrace failure as part of the design.  Connections are not resilient.

14:08 They have extended Akka cluster to allow for need based elastic
  redistribution

14:20 First mention of Apache Kafka message broker.
  This looks like a good article about the origin of Kafka:
  <http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying>.)

  Fates organizes incidents recorded by Chronos into timelines, grouped
  by categories.  Can also create “Views” as an aggregation of timelines
  or other views.

15:56 (Slide 15)

  More details on Timeline: history, sequence, order, of events from
  Chronos.

16:21 (Slide 15)

  Customer timeline example. 

17:16 (Slide 15)

  First mention of Cassandra (18:11).  Use this as their NoSQL impl.

18:37 (Slide 16)

  More details on how Fates uses Cassandra.

  This is where they define the schema.

18:42

  Every timeline category has a table, named <TimelineCategory>_tl.

  Key definition is most important to enable fault tolerance and
  horizontal scaling.  Key is

  ( (entityId, Date), timestamp)

19:23

  If they had chosen just the entityId as the partition key, it would
  not have been a good choice, because customers are going to want to
  do things with the entities.  This would result in an unbalanced
  cluster: some nodes would contain much more data than others.
  Throwing in the date and timestamp lets the data fan out over time.
  Every day they define a new key.

20:19 (Slide 17) Views

  Views are built by jobs.  Want to do machine learning and logical
  reasoning.

  Want to distinguish between deduction, induction and abduction

  Deduction: the cause of the event.  If it’s raining, the grass will be
  wet.

  Induction: Not the strict mathematical definition!  A conclusion
  performed after several observations.

  Abduction: When your deduction is correct.  For example if we have
  several customers that watch matches from the Liverpool team, then we
  can conclude that they are supporters of Liverpool.

  <https://www.butte.edu/departments/cas/tipsheets/thinking/reasoning.html>

22:35 (Slide 18, 19)

  Neo Cortex (Speed layer)

  Nothing more than library built on Apache Spark. 

22:56 (Slide 19) First mention of Microservices.

  He said Neo Cortex was an ease of use layer that allows their
  developers to create microservices on top of the omnia platform.

  Use the distributed nature of Spark, while hiding the complexity of
  interacting with the other subsystems.  Fast and realtime.

  Looks like this is where their domain experts (data scientists) work.
  Lots of terms from statistics in this section.  “Autoregressive
  models” “Monoids” “Rings”.  Looks like the Breeze framework
  <http://www.getbreezenow.com/> was mentioned here.

24:14 (Slide 19)

  Essentially, NeoCortex provides the building blocks for their data
  scientists to generate recommendations, identify fraud, optimize
  customer experience, etc.

24:35 (Slide 20)

  Scala code for one of their microservices.

  Note, this doesn’t seem to need to be in Scala now that Java SE 8 has
  Lambdas.

  He mentions use of Observable (line 12), but interestingly does not
  mention use of ReactiveX
  <http://reactivex.io/documentation/observable.html>

27:00 (Slide 21)

  How Spark runs the code from slide 20.

    Map allows them to leverage the power of parallelism.  Lambda in Map
    function is performed by all the nodes in the cluster in parallel.

    ReduceByKey still has parallelism (28:50). Process the Desktop and
    Mobile channels in parallel (for example).  Because the parallelism
    is reduced, this is going to be the most expensive of the processes
    in slide 20.

29:04 (Slide 21, 22, 23) Hermes

  Still referring to the slide 20 code, he points out that every single
  lambda of that thing is running on different nodes, and in parallel!
  This is what Neo Cortex does.  It understands Spark very well: RDD
  partitions, parallelism in Spark.

  30:40 Simple full duplex communication for the Web. 

  Data as API

31:10 (Slide 24)

  Hermes distributed cache

  Hermes JavaScript framework.  Allows their developers to interact
  without leaving the domain.  “We want happy web developers”

  32:09 Mentions use of JSON Path in order to have a graph which can
  fully represent the domain model.

  32:33 Hermes is responsible for caching in the web browser the
  information relevant for the page.

  32:39 The Hermes Node component (not node.js based?) is the mediator
  between the two worlds.

  34:12 Dispatcher, one of the most important components.  If there is a
  lot of data heading to the client, but we know the client doesn’t
  really need all of it, dispatcher will ensure only the last one gets
  delivered.  Batches and optimizes network communication.

  35:07 this is what differentiates Hermes from similar frameworks.  It
  starts to be proactive, rather than reactive!  It enables prediction
  based on user preferences.

36:09 (Slide 25) Infrastructure

  36:21 Mesos usage.  (Slide 26) “Game changing.  Slide 26 shows how IT
  development has changed in the last 20 years.”  It used to be a
  mainframe with lots of nodes.  With Moore’s law ending, the world
  changed the other way around.

  37:51 Use of Marathon.  A REST API built on Mesos to provide
  elasticity to scale up and scale
  down. <https://mesosphere.github.io/marathon/docs/>.

  38:04 Docker

  “It can be considered the same concept we have seen with the Actor
  before.  The error kernel pattern we had in the actor model, and the
  supervisor mechanism, is a nice concept of failure.  If I have to
  fail, I want to fail in isolation.  For this reason, every single
  component of the Omnia platform should run inside a Docker
  container.”  This lets them contain failure.

38:49 (Slide 27)

  An example of how Omnia is domain agnostic.

  Each part of Omnia is provided with a JMX monitor (IMHO this is the
  secret sauce).  Through Chronos, we can create a stream whose source
  is the JMX data!  We have an observable that shows the health status
  of the whole platform.  Through Fates we have stories about the
  system.  Through NeoCortex, we can become aware that we need more
  resources at certain times, for example around the schedule of football
  matches.

41:16 Oliver takes questions

  Is it available for external use?  They are looking to open source it
  in the long term.

  42:23 Technology votes

  Why didn’t you choose Akka Streams at first?  Any tips on adopting
  this technology?  Neo Cortex uses Spark Core and Spark Streaming.

  44:22 In addition to Cassandra, are you using any other big data
  storage for fates?  No.  They looked at several others, but Cassandra
  was a perfect fit.  It also has a good integration with Spark.

  45:12 Any problems with persistence and Mesos?  Mesos is usually
  mentioned in the context of persistent processing.  Yes.  They are
  exploring how to do this.  Their Cassandra cluster is not yet
  integrated into Mesos. 

  45:58 Loss of speaker.  50:59.  Cassandra has enough already without
  putting it into Mesos.

  47:38 How many people and how long did it take to have this ready for
  production?  Omnia is not yet in production!  It’s part of the
  research job they are doing at WH Labs.  Four engineers.  Staged
  delivery.  He didn’t say how long. 

  48:34 Is there a danger of using Omnia to monitor Omnia?  No; it’s
  something they will introduce little by little.  They don’t see too
  much danger of that.

  49:32 Have you considered using stream processing frameworks like
  Samza or Storm?  What is the difference between these and what you
  use?  He likes Samza; it fits with Kafka.  He found Spark Streaming
  better suited to distributed processing.  Better semantics.  More
  functional approach.

  51:52 Are you using public or private cloud? Private at this moment.
  Reason: data sensitivity, legal framework.

  52:36 Any thoughts on how Akka persistence compares to your
  persistence stack?  They are using Akka persistence.  He didn’t talk
  about the fact that data is represented as a graph using the Actor
  model system.  Implemented using Akka persistence on top of Cassandra.

  53:32 What can you advise regarding career opportunities with Akka,
  Play, Scala?  With the coming of these highly parallel systems, we
  need to find a different way of programming.  This is why he likes
  the reactive manifesto.  They are a JVM house.  They used to be a
  Java house.  Since they started to adopt Akka for referential
  transparency, he doesn’t see much future in Spring or Java EE.


Using Apache Spark DataFrames for Processing of Tabular Data

This post will help you get started using Apache Spark DataFrames with Scala on the MapR Sandbox. The new Spark DataFrames API is designed to make big data processing on tabular data easier. A Spark DataFrame is a distributed collection of data organized into named columns that provides operations to filter, group, or compute aggregates, and can be used with Spark SQL.
https://www.mapr.com/blog/using-apache-spark-dataframes-processing-tabul...
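
As a taste of the API, here is a minimal sketch in Java rather than Scala (people.json and its columns are made up, and this assumes a Spark 1.4-era SQLContext):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SparkConf conf = new SparkConf().setAppName("DataFrameDemo").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(sc);

// Load a JSON file as a DataFrame of named columns.
DataFrame people = sqlContext.read().json("people.json");

// Filter, group, and aggregate, then print the result.
people.filter(people.col("age").gt(21))
      .groupBy("country")
      .count()
      .show();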



The Curious Case of the char Type

It's been almost twenty years since Gary Cornell contacted me to tell me “Cay, we're going to write a book on Java.” Those were simpler times. The Java 1.0 API had 211 classes/interfaces. And Unicode was a 16-bit code.

Now we have over 4,000 classes/interfaces in the API, and Unicode has grown to 21 bits.

The latter is an inconvenience for Java programmers. You need to understand some pesky details if you have (or would like to have) customers who use Chinese, or you want to manipulate emoticons or symbols such as 'TROPICAL DRINK' (U+1F379). In particular, you need to know that a Java char is not the same as a Unicode “code point” (i.e., what one intuitively thinks of as a “Unicode character”).

A Java String uses the UTF-16 encoding, where most Unicode code points take up one char value, and some take up two. For example, the tropical drink character, erm, code point is encoded as the sequence '\uD83C' '\uDF79'.

So, what does that mean for a Java programmer? First off, you have to be careful with methods such as substring. If you pass inappropriate index values, you can end up with half a code point, which is guaranteed to cause grief later. As long as index values come from a call such as indexOf, you are safe, but don't use str.substring(0, 1) to get the first initial—you might just get half of it.
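
To make the pitfall concrete, here is a minimal sketch (the string contents are made up for illustration):

// U+1F379 TROPICAL DRINK is encoded as the surrogate pair \uD83C \uDF79.
String str = "\uD83C\uDF79 to you!";
System.out.println(str.length());        // 10, not 9: the drink counts twice
System.out.println(str.substring(0, 1)); // "\uD83C": half a code point
// Safer: advance by code points, not by char indices.
System.out.println(str.substring(0, str.offsetByCodePoints(0, 1)));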

The char type is now pretty useless for application programmers. If you call str.charAt(i), you might not get all of the code point, and even if you do, it might not be the ith one. Tip: If you need the code points of a string, call:

int[] codePoints = str.codePoints().toArray();

I recently finished the book “Core Java for the Impatient”, where I cover the “good parts” of Java, for programmers who come from another language and want to get to work with Java without sorting through twenty years of historical baggage. In that book, I explain the bad news about char in somewhat mind-numbing detail and conclude by saying “You probably won’t use the char type very much.”

All modesty aside, I think that's a little better than what the Java tutorial has to offer on the subject:

  • char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).

Uffff. What is a “single 16-bit Unicode character”???

A few days ago, I got an email from a reader who had spotted a somewhat unflattering review of the book in Java Magazine. Did the reviewer commend me on giving readers useful advice about avoiding char? No sir. He kvetches that I say that Java has four integer types (int, long, short, byte), when in fact, according to the Java Language Specification, it has five integral types (the last one being char).

That's of course correct, but the language specification has an entirely different purpose than a book for users of a programming language. The spec mentions the char type 113 times, and almost all of the coverage deals with arithmetic on char values and what happens when one converts between char and other types. Programming with strings isn't something that the spec cares much about.

So, it is technically true that char is “integral”, and for a spec writer that categorization is helpful. But is it helpful for an application programmer? It would be a pretty poor idea to use char for integer values, even if they happen to fall in the range from 0 to 65535.
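
A quick sketch of why (values made up for illustration): char arithmetic silently promotes to int, and overflow wraps around without complaint.

char c = 65535;                // the maximum char value
c++;                           // silently wraps around to 0
int sum = 'A' + 1;             // 66: chars promote to int in arithmetic
char next = (char) ('A' + 1);  // must cast back to get 'B'
System.out.println('A' + 1);   // prints 66, not "B"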

I like to write books for people who put a programming language to practical use, not those who obsess about technical minutiae. And, judging from Core Java, which has been a success for almost twenty years, that's working for the reading public. I'll raise a glass of 'TROPICAL DRINK' (U+1F379) to that!



SIP Servlet 2.0 and CDI

SIP Servlet 2.0 makes it possible to use CDI with SIP Servlet applications. It supports SIP Servlet POJOs as component classes that qualify as CDI managed beans, and it defines SIP-specific CDI beans and scope types. Let's explore each of them.

SIP Servlet POJOs qualify as CDI managed beans

With this, it is now possible to inject CDI beans into SIP Servlet POJOs, making all features of CDI available to SIP Servlet applications. Note that the lifecycle of SIP Servlet POJOs is still managed by the SIP container, just like the other component classes defined in the Java EE specification. This also applies to SIP listeners and regular SIP Servlets.

SIP specific built-in beans

There are five SIP-specific built-in beans, as listed below.

  • javax.servlet.sip.SipFactory
  • javax.servlet.sip.SipSessionsUtil
  • javax.servlet.sip.TimerService
  • javax.servlet.sip.SipApplicationSession
  • javax.servlet.sip.DnsResolver

These objects, which are otherwise familiar to SIP Servlet developers, can now be injected into a SIP Servlet using @Inject.

SIP specific CDI scopes

  • @SipApplicationSessionScoped
  • @SipInvocationScoped

There are two standard scope types defined. When a CDI bean is SipApplicationSession scoped, its lifecycle is bound to a SipApplicationSession. With this, applications can be developed without having to recreate state objects from attributes saved in the SipApplicationSession; the lifecycle of the bean is managed by the container. Given that containers usually manage concurrency and availability at the level of the SipApplicationSession, this scope is an important feature.

Similarly, the lifecycle of an object with SipInvocation scope is tied to the invocation of a SIP Servlet POJO or any listener.

Here is an example of a bean which is @SipApplicationSessionScoped:

@SipApplicationSessionScoped
public class MyProxy implements Serializable {

  private long startTime;

  public void forward(SipServletRequest req) throws Exception {
    SipURI uri = (SipURI) req.getRequestURI().clone();
    req.setRequestURI(uri);
    Proxy p = req.getProxy();
    p.proxyTo(uri);
    startTime = System.currentTimeMillis();
  }

  public void subsequentRequest() {
    System.out.println("Total elapsed time is " +
      (System.currentTimeMillis() - startTime));
  }
}

Also, see how a POJO uses it. Note that an instance of MyProxy will be created for each call by the container.

@SipServlet
public class SipHandler {

  @Inject MyProxy myProxy;

  @Invite
  public void onInvite(SipServletRequest request)
    throws Exception {
    myProxy.forward(request);
  }

  @AnyMethod
  public void onRequest(SipServletRequest request)
    throws IOException {
    myProxy.subsequentRequest();
  }

}

Hope you find this useful.



GeekOut 2015 Summary


I last had the pleasure of visiting the lovely Baltic city of Tallinn in 2012, when I presented JSF 2.2 and the Rockstar talk at GeekOut 2012. Now that I've got something new (for me anyway) to talk about, I made the cut and was invited back to present Servlet 4.0 at GeekOut 2015. Attendance was capped at 400, giving this conference a very exclusive feel. Indeed, 99% of those that registered for the conference actually did attend. This was the 5th installment of the GeekOut conference, hosted by ZeroTurnaround. This was the first time the conference had two tracks, so my report here only covers the sessions I actually attended. All of the sessions were video recorded, and I expect the recordings will be made available soon.

Day One

Day one started with back-to-back plenary sessions offering two different perspectives on the #java20 theme. Stephen Chin gave a historically rich but technically light session featuring lots of freshly recorded video clips with Java luminaries. Of course there was ample content from James Gosling, who I would like to congratulate for winning the 2015 IEEE John von Neumann Medal. This puts James in the company of such titans as Leslie Lamport, Donald Knuth, Ivan Sutherland, and Fred Brooks. I was happy to see that Stephen dove deeper and offered the perspectives of John Rose and Georges Saab on more fundamental aspects of the history of Java. Martin Thompson followed Stephen with a very complementary session. The session was so complementary I'd almost say they coordinated. Martin's session gave his personal perspective on Java over the years, with some very interesting stories from his work on making Java perform well. I liked his perspective on the causes and challenges of bloat in a long-lived software ecosystem. Another very interesting perspective was the extent to which high-frequency trading drives advances in performance (in Java and in the entire industry). Martin's talk piqued my desire for a #java20 talk about all the companies that have been spawned directly or indirectly by the Java ecosystem. I'm thinking Interface21, Tangosol, JBoss, NoFluffJustStuff, ZeroTurnaround, Atlassian, Parleys and there are many others. Hey, I'm pretty sure there's an interesting talk in there somewhere.

After the plenary sessions, we broke out into the two tracks, starting with my session on Servlet 4.0 and a session on Cassandra. My session was quite well attended, and it went pretty smoothly. We'll see how the feedback shows up, however! After my session, I went down to see Markus Eisele talk about Apache Camel. I hadn't followed the progress in the Camel community, and I'm happy to see it is still doing well. It was also nice to see my old pal Gregor Hohpe represented virtually, as his book is present in spirit in Camel itself.

I was very keen to see the Vaadin talk from Peter Lehto. I had long been perplexed at Vaadin's ability to decouple itself from GWT, particularly as GWT's popularity has dwindled. This talk, at last, promised to lay bare the secret at the heart of Vaadin: its runtime is dependent on GWT. I was not disappointed, but I was also very pleasantly surprised. Mr. Lehto directly addressed the question of the relevance of server side UI frameworks, including Vaadin (and JSF, though he didn't name it specifically) in an HTML5 JavaScript framework world. He did so by pointing out the importance of abstraction, which I've long been pointing out when presenting on JSF. In the case of Vaadin and JSF, their core value add is the authoring experience. With Vaadin, it's Java programmers who want to treat the world like Swing. With JSF it's "page developers" who want to treat the world like some form of Visual Basic environment. For Vaadin, its existing abstraction allows their underlying runtime to leverage W3C Web Components (or the Polymer implementation of the same) for some Vaadin components while relying on GWT for others. Peter put a strong stake in the ground and predicted that W3C Web Components are the future for web development. I don't disagree, but JSF is well positioned to leverage W3C Web Components because it fits in nicely with the JSF abstraction.

Day Two

Day two started out with Attila Szegedi's highly technical Rhino talk. This was the first talk of the day, after the party night, so it was a little lightly attended. However, those that made it there were rewarded with an in-depth understanding of the rationale for some performance-related design decisions in the implementation of Nashorn.

The 10:30 slot was another effectively plenary session, but out in the demo area. Stephen Chin's highly effective NIGHTHACKING brand came to GeekOut with a panel discussion on the #java20 theme. The video is on the NIGHTHACKING website. This was a lot of fun, and I got to put my Javagator old-timer test out there. I also had the pleasure of a brief chat with Stephen regarding JSF 2.3 and Servlet 4.0.

I was really looking forward to Tomasz Nurkiewicz's session about CompletableFuture, particularly because of its use in the Java SE 9 HTTP/2 client. Tomasz managed to pack a whole lot into a short, well-constructed, code-powered session. It's not easy to explain the differences between thenApply(), thenCombine(), thenCompose() and many other methods in the API, but Tomasz succeeded. He even surfaced an important naming inconsistency between the CompletableFuture API and the java.util.Optional API: thenCompose() == flatMap(). For more on this topic from Tomasz, check out his blog entry The Definitive Guide to Completable Future. Personally, I think it's a bit bold to give a single blog entry such a lofty title, but you can't argue that it does indeed cover the topic very well. I meant to ask Tomasz if his code samples from the talk were taken from an upcoming book. Tomasz, if you happen to see this little blog entry, please plug the book if there is one.
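
For readers who haven't met these methods, here is a minimal sketch of the naming point (the tasks and values are made up):

import java.util.concurrent.CompletableFuture;

CompletableFuture<Integer> f =
    CompletableFuture.supplyAsync(() -> 21);

// thenApply: map with a plain function.
CompletableFuture<Integer> doubled = f.thenApply(n -> n * 2);

// thenCompose: "flatMap" with a function that itself returns a future.
CompletableFuture<String> composed =
    f.thenCompose(n -> CompletableFuture.supplyAsync(() -> "got " + n));

// thenCombine: join two independent futures with a BiFunction.
CompletableFuture<Integer> combined =
    f.thenCombine(doubled, Integer::sum);  // 21 + 42 = 63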

I had high hopes for the next talk, Gleb Smirnov's concurrency talk. It was probably a great talk, but sadly this is when my jetlag hit hard and I was struggling to keep up. I'll look for the video!

I took a pass for the 15:00 slot due to the aforementioned jetlag and opted to save my energy for one final session, Andrzej Grzesik's Go. I'd taken a quick look at Go before the session, so I was in a good position to enjoy it. This session made no excuses about having nothing to do with Java and instead just tried to give a quick tour of the Go language and programming environment, with a view towards lowering the barriers to entry to give it a try. Go succeeds because it rules several fundamental things as simply out of scope. There is no dynamic linking. There is no UI. There is no API to the threading model. There is no inheritance. I'm glad Go is out there because sometimes you don't need that stuff. For what it's worth, here's a nice post on Go from the Docker guy.

Finally, there were some brief and tasteful closing remarks from ZeroTurnaround founders and my good friends Jevgeni Kabanov and Toomas Römer. I'm glad to see these guys doing well.



Automating Deployment of the Summit ADF Sample Application to the Oracle Java Cloud Service




