This tutorial will help you get started with Standalone Spark applications on the MapR Sandbox.
Hard on the heels of JavaLand was ConFESS. This was the eighth installment of the conference that started life in 2008 as JSF Days, switching to the name "ConFESS" in 2011; the name stands for "Conference for Enterprise Software Solutions". Last year, ConFESS was held in partnership with JavaLand in Brühl, Germany. Neither party was satisfied with how that turned out, and in 2015 ConFESS returned to its home in Vienna, where it will stay. It was a relatively small event, with just over 200 participants, which nicely filled out the venue, the C3 event center in the 3rd district. In my opinion, its small size is a large asset: the ability to have the entire event schedule on two sides of a 4x5 inch card is very convenient.
My overall impression of the conference was very positive. There was a wide variety of talks from speakers I hadn't seen before on the conference circuit. There was a good breadth of coverage in diverse tracks ranging from agile/methods to Java EE to tools to client side technologies, and there was an excellent band on Tuesday night, Florian Braun and FSG Company. There was also a Lego Mindstorms EV3 competition that got rave reviews, but I didn't attend that portion of the event.
The full set of abstracts from the conference are available at the regonline site for the event. You can use that site to learn more about the sessions for which I will give my brief impressions in the remainder of this blog entry.
The Tuesday keynote was from Oracle Labs' Thomas Wuerthinger. Thomas presented his exciting work on the Graal VM. First off, I'm glad to see that Oracle has continued Sun's tradition of funding long-term research in the spirit of Sun Labs, founded by computing pioneer Ivan Sutherland (yep, just checked, he still works for Oracle). The basic idea of Graal appears to be: take the abstract syntax tree concept from compiler design and make it a first-class part of the JIT process, allowing the runtime to rewrite itself as the program runs to achieve greater performance without sacrificing agility. Cool stuff, and great for a keynote.
Sticking with the JSF heritage of the conference, next up was Cagatay Civici's talk about PrimeFaces. Cagatay introduced the new "layouts" concept, built on JSF 2.2 Resource Library Contracts. The base offering consists of two new layouts, Sentinel and Spark. One thing I've always liked about PrimeFaces is how they take the base concepts of the core JSF specification and use them to maximum effect, taking full advantage of new features, large and small.
Diving down a level, Johannes Tuchscherer from CloudFoundry talked about Docker and how it relates to offerings from Pivotal. Johannes put the hype into perspective, showing how you still need other technologies to actually create value with Docker.
Sticking in the Pivotal realm, Jürgen Höller gave the Spring 4.1 overview talk. It was nice to see that they were able to leverage Java SE 8 features while producing a binary that runs on Java SE 6. I was happy to have the opportunity to ask Jürgen how he pulled that trick off, and the answer is basically build-time static code analysis. They compile with Java SE 8 with -source and -target 1.6, and have a build-time tool that looks for usages of Java SE 8-only idioms and APIs and flags them as failures.
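Spring's checker is an internal tool, but the same kind of guarantee can be approximated with the openly available Animal Sniffer Maven plugin, which fails the build when code compiled under a newer JDK uses APIs absent from an older platform signature. The snippet below is a sketch of that general approach, not Spring's actual build configuration:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.14</version>
  <configuration>
    <signature>
      <!-- Fail the build on any API not present in Java SE 6 -->
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java16</artifactId>
      <version>1.1</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Combined with `-source 1.6 -target 1.6`, this catches the case the compiler alone misses: bytecode that is Java 6 compatible but calls methods that only exist in Java SE 8.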
The next talk I attended was a really practical, hands-on session about Java Flight Recorder from Johan Janssen. I'm a big fan of learning to get more out of tools I already have. JFR has been a part of the JDK for quite some time.
I was happy to see my good friend and fellow Oracle employee Mike Keith given the Wednesday keynote slot. Mike is a veteran of the conference trail, author of Pro JPA2 and former JPA spec lead. Mike was talking about an exciting new product from Oracle: Mobile Back End As A Service (MBaaS). In a nutshell, this product packages up everything enterprises need to deploy mobile based applications that are built on their existing infrastructure. Mike's slides are available for download.
My own session was up next, at which I gave a status update on JSF 2.3. Briefly, it's a community driven release aimed at preserving your existing investment in JSF. I've uploaded my slides to slideshare.
As a counterpart to Johan Janssen's session yesterday, I attended Anton Arhipov's session about ZeroTurnaround's XRebel product. I liked his straightforward pitch: most applications receive very little profiling attention, so let's make a super simple product that lets you grab the low-hanging fruit with maximum performance gain. Indeed, the slick browser-based UI is very easy to use. When asked about various corner cases, Anton was honest and answered that the current state of the product is very narrowly focused on where the most value can be easily extracted. This focus is a key success factor for ZT, in my opinion.
I've always talked up the importance of maintainability, and sold that as a strong suit of the Java EE stack, so it was with great interest that I attended Bernhard Keprt's session about maintenance. One reason I like attending conferences is to remove my 3rd order ignorance by exposing me to technologies I otherwise would not encounter. During Bernhard's talk, he introduced me to VersionEye. The value-add of this tool is easy to perceive: given that you have lots of dependencies, let's have a tool that keeps an eye on them and lets you know when they update.
Stefan Schuster gave a session from his experiences in developing apps for the three big flavors of mobile deployment platforms: native, Apache Cordova, and mobile web app. I liked this session for its first-hand perspective.
To close out my 2015 ConFESS session attendance, I viewed Alex Göschl's session on AngularJS. Alex shared his experiences in deploying Angular 1 for the jobs portal of conference sponsor Willhaben. FWIW, I found nine job postings for JSF on the site and four for Angular. This was an enjoyable talk, and Alex did a great job explaining the extremely heterogeneous set of tools and technologies used in the project. Prior to switching the jobs portal to Angular 1, they were using GWT; it was pretty much a complete rewrite. The most useful aspect of the talk for me was the ease with which such an apparently complex tool chain is now accepted and leveraged by your average front-end team. For example, the following dev-time build process was rattled off as if it were no big deal:
1. clean build targets
2. compile LESS to CSS
3. copy vendor libraries
4. compile and optimize Angular templates
5. compile and check TypeScript
6. copy to Tomcat
7. inject Velocity templates
It must just be my Java EE roots that make me feel the preceding list is a lot more complex than a similar build process in a Java EE stack. I need to spend more time getting to know the workflow in current front-end shops. Can anyone recommend a user group or meetup in Orlando, FL?
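For what it's worth, pipelines of this shape are often wired together as plain npm scripts. The following package.json fragment is purely illustrative; the tool choices, paths, and task names are my own assumptions and are not taken from the talk:

```json
{
  "scripts": {
    "clean": "rimraf build",
    "styles": "lessc src/styles/main.less build/css/main.css",
    "vendor": "cpx \"node_modules/angular/**\" build/vendor",
    "templates": "ng-html2js \"src/**/*.html\" build/templates.js",
    "compile": "tsc -p tsconfig.json",
    "deploy": "cpx \"build/**\" ../tomcat/webapps/portal",
    "build": "npm run clean && npm run styles && npm run vendor && npm run templates && npm run compile && npm run deploy"
  }
}
```

Each step is a small single-purpose tool chained together, which is exactly what makes the pipeline look long while each piece stays simple.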
Following the two day conference was my full day of workshops. I had a small but dedicated room of students and I hope they enjoyed the sessions.
You might have heard some folks in the Java EE community scream "GlassFish is dead!"
As I work on two technologies that are going to end up in Java EE 8, I can say that for me that is certainly not the case.
Certainly we do not do a lot of "official" releases of the Reference Implementation of Java EE, but does that mean it is dead?
Not in the slightest! For Mojarra and Ozark we run integration builds using the daily builds of GlassFish!
Do I see a lot of breakage running against daily builds? No!
So if you want a more recent "release" of GlassFish, just use a daily build.
After all, what is a version number? Do you need 4.x, or can it just be 4.1.YYYYMMDDHHSS ;)
When writing an article about HtmlUnit and Maven integration testing I never expected that article to become as popular as it has.
Most of my blog entries have a modest number of reads, but apparently HtmlUnit integration testing is popular enough to warrant 11,109 reads as of today.
For a technical blog I consider that a good number ;)
For the original blog entry, see https://weblogs.java.net/blog/mriem/archive/2011/12/13/htmlunit-and-mave...
Software is an interesting thing.
We currently live in a very fast-paced society where changes seem to come and go quickly. However, that is really only true for consumer electronics. Most systems that consumers are hardly aware of run stacks that are a couple to several years old, and for those it is not economical to change at the rate consumer electronics does.
Mojarra is a piece in such a stack. As part of our day job we have to maintain a large set of code lines for our customer base. If it sometimes looks like we are not moving forward fast enough, just imagine if we had to support your phone for 10 years. Sounds crazy, right? Well, for enterprise-grade software that is the reality. Is that bad? Certainly not. Just keep it in mind ;)
So while we keep innovating Mojarra we also have to keep maintaining your older stack. And thus we have to maintain our own stack so we can test Mojarra itself.
As part of a large migration/update, we are now happy to report that the several older test pieces of the Mojarra build are now all in the same Maven build structure. To put it in perspective, this is the culmination of work that was started in 2012. With this all in place, we hope we can be more agile in responding to issues coming from our customers and in innovating Mojarra going forward.
For those that want to contribute to Mojarra, please ping me and I'll explain it in more detail.
Recommendation engines help narrow your choices to those that best meet your particular needs. In this post, we’re going to take a closer look at how all the different components of a recommendation engine work together. We’re going to use collaborative filtering on movie ratings data to recommend movies. The key components are a collaborative filtering algorithm in Apache Mahout to build and train a machine learning model, and search technology from Elasticsearch to simplify deployment of the recommender.
This tutorial will describe how a surprisingly small amount of code can be used to build a recommendation engine using the MapR Sandbox for Hadoop with Apache Mahout and Elasticsearch.
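To make the collaborative filtering idea concrete before diving into the tools, here is a minimal, self-contained sketch of the co-occurrence approach in Python. It is illustrative only: the actual pipeline described here uses Mahout to compute item-item indicators at scale and Elasticsearch to serve the scoring query, and all function and variable names below are my own, not part of either product's API.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_indicators(ratings, min_count=1):
    """Build item-item co-occurrence counts from (user, movie) pairs.

    This mirrors the core idea behind Mahout's item-similarity step:
    movies watched by the same users become 'indicators' for each other.
    """
    by_user = defaultdict(set)
    for user, movie in ratings:
        by_user[user].add(movie)

    cooc = defaultdict(lambda: defaultdict(int))
    for movies in by_user.values():
        for a, b in combinations(sorted(movies), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1

    # Keep only indicators that co-occurred at least min_count times.
    return {m: {n: c for n, c in nbrs.items() if c >= min_count}
            for m, nbrs in cooc.items()}

def recommend(indicators, history, top_n=3):
    """Score unseen movies by summed co-occurrence with the user's history.

    In the full system, this scoring step is what the Elasticsearch
    query performs against the indexed indicator fields.
    """
    scores = defaultdict(int)
    for seen in history:
        for movie, count in indicators.get(seen, {}).items():
            if movie not in history:
                scores[movie] += count
    return sorted(scores, key=lambda m: (-scores[m], m))[:top_n]
```

For example, if two users who watched "Alien" also watched "Blade Runner", a user whose history contains only "Alien" will get "Blade Runner" ranked first. The production version replaces the in-memory dictionaries with a distributed Mahout job and a search index, but the scoring logic is conceptually the same.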
The Java EE 8 process is underway and JSF 2.3 is making progress.
We have just released our second milestone.
See https://javaserverfaces.java.net/2.3/releasenotes.html for the release notes.
Download it from https://javaserverfaces.java.net/2.3/download.html
After a short hiatus from blogging, I’d like to show you something exciting today. I can’t take the credit for all of the work - the development was originally started by my son Martin, then picked up by my colleague Jaroslav. I’ve really just added a few finishing touches to make the module releasable. So voilà: I present to you the Google Places module! It’s an integration of Magnolia and Google Maps and Places done a little differently from what you might expect.
All the places you’ll ever need
What you typically want when you deploy Magnolia in your organization is to take maximum advantage of its UI. What you typically get when you ask devs to place a map in your website is a fixed-size map, perhaps with some text file or direct HTML access where you can edit a list of markers. This changes with Magnolia: now you have one pretty Magnolia-style app that your editors can use to load, edit, categorize, and sort all the markers they use on one or more maps across their site.
Using marker categories
Every marker can be assigned one or more categories. When putting together a map, you then pull in the relevant category or categories. One marker can therefore be used in multiple maps, depending on how many categories it's relevant in. This allows the editor to organize the markers according to type, but then re-use them on many different maps according to category. To make the sample a little bit more interesting, we'll deliver the module with pre-set markers showing the locations of Magnolia's offices and those of all partners around the world. So for example, if you want to assemble a map with all Magnolia partners, just tell the map to draw in all markers categorized as "Partner". Boom!
If you already have a list of locations saved in a spreadsheet, save it as an Excel file and get it imported into the app directly.
While you can (and really should) specify the exact location of each marker using latitude and longitude, you can also just leave in the address; the exact position will then be retrieved on the fly using the Google Places API. Due to the limits on free usage of the API, you should not do that for too many markers, or you should buy higher-volume access if you need to.
Download, use, contribute
Now for the best part - the module is already released and ready for you to use. Feel free to contribute more functions and improvements to it, if you like it.
Java/Akka-based technology models, each of which models a different technology, are active in a distributed Internet community. Any of the technology models may have a definition of a better future version of itself. A technology model that aspires to improve itself engages in conversations with other models in the community, seeking to discover behaviors exportable from other technology models that it can integrate into itself to achieve its goal.
Read the project abstract at : http://jcmansigian.webfactional.com/aed-abstract.html
Get the project source and documentation with this command:
$ git clone https://github.com/aed-project/aspire aspire-emergent-design