Free IT Books, Study Guides, Practice Exams, Tutorials and Software
Thursday, November 27th 2014
Weblogs from Javablogs.com

Three Quick Tips for Magnolia App Development

A few weeks ago, I published a script here that helps you generate new Magnolia apps in a matter of seconds. Judging from the feedback I received, quite a few of you took advantage of it - that’s always good to hear. In what will hopefully be a series of posts, I’ll show you how to change various configurations or extend your apps to achieve a better user experience.

Tip 1: Turn off inline editing

When you generated an app with my script, you might have noticed that double clicking on the name of any node opens it for inline editing. While I personally find that very handy for certain types of apps, it’s not suitable for all. If you want to change that behavior, all you have to do is go to your app and find the following properties in the Configuration app:




Set them both to false. A piece of cake! Now, when you double click, the item doesn’t open for editing. In fact it doesn’t do anything. That’s probably not so ideal either, so let’s think about what our user wants to do by default. I’d bet that editing the item is probably of interest. Can we enable this somehow?

Tip 2: Enable item editing

Sure, in the Configuration app, find (or add) /modules/blog/apps/blogs/subApps/browser/actionbar@defaultAction and set its value to the name of the action you want to perform as a default, for example editMyItem. If you are not sure what the name of the action should be, have a look at /modules/blog/apps/blogs/subApps/browser/actions. Under this node, you will find all the available actions for your app. All set? Reopen your app and check it out - double clicking on the item name will open said item for editing. All good. You might also notice that the default action is invoked not only on double click but also when simply pressing enter while having items selected in the workbench.

Tip 3: Expand folders easily

Now, imagine an app where you have some hierarchy of folders. That should be quite common, right? Why else would we be using hierarchical storage for it? All you really want in such an app is to open a folder and reveal the content underneath when double clicking. How can we achieve that?

This operation is normally done by the presenter directly, not by the action - so it will be a little trickier. First, we need to create such an action. The implementation is quite trivial: get the WorkbenchPresenter instance injected into your action by Guice; in the execute() method, call presenter.expand() on the selected item. If you don’t want to code the action yourself, you can take the one available within the examples here, compiled and added to your instance. To get it working you will also need an appropriate action definition. Compiled? Good, all that remains is to add this action to our app. Under /modules//apps//subApps/browser/actions create a new action, say openFolder, and set its class property to our action definition class magnolia.app.samples.ui.action.OpenFolderActionDefinition.

You probably also want to set the availability of this action. To do so, create an availability subnode under openFolder. To keep it simple, just add an extends property there and point its value to the availability of the addFolder action by setting it to ../../addFolder/availability. The action is created and set. Now, all we want to do is set it as the default action described above. Done? Cool, let’s go, reopen our app and try double clicking on some folder. Oops. Doesn’t work. To make it worse, there’s an ugly stacktrace in the console. What happened here?

We asked for the presenter to be injected into our action, so Guice tries to oblige. But it does so by creating a new instance of the presenter - one which is not configured, and definitely not the one our action wants to manipulate. So what now? First, we need to add our workbench presenter instance to the list of parameters to choose from when creating the instance of our action. To do so, we customize BrowserPresenter to include the workbench presenter in the list of parameters it passes on, by extending BrowserPresenter and overriding its prepareActionArgs() method. To save you the pain, you can find such an extended presenter here. Good, so our action will get the presenter; now we just need to make sure that our app will use this extended presenter. Unfortunately this is not something we can configure in the repo, so we need to change the module descriptor of our module (you probably had to create one by now anyway, even if you didn’t have one originally, to have somewhere to place all the classes we wrote here). So go ahead - modify your module descriptor to add the following snippet:


If you don’t know where in the module structure the descriptor is, and you used a Maven archetype to build your module, look at src/main/resources/META-INF/magnolia/yourmodule.xml. Rebuild your module, redeploy, and try it out. All works now: double clicking on folders opens them. Cool. That should be enough for one day.

What’s next?

And, as a little teaser, here are some of the other things that I’ll cover in future blog posts:
- image providers showing thumbnails composed of a mix of images from items in the subfolders
- having different default actions for different kinds of items in the app
- having choose dialogs showing different things when being opened from other apps
- and whatever else I hear from you would be interesting!

Photo credit: Len Matthews, flickr

Auth ID Overload with domain .id (Indonesia) and Meruvian Yama OAuth2 Server

Indonesia has released the .id domain to the public, and more and more websites are using it; the domain is costly, at around $50/year.

On the other hand, with this domain we can turn a website into an identity portal.

And yes, we are among those using it (http://www.merv.id), and we have also released an OAuth server; take a look at https://github.com/meruvian/yama, a 2-in-1 project that can serve as both an MVC platform and an OAuth server. It is compatible with the JENI Education Program (http://www.jeni.or.id).

So now, vocational high schools in Indonesia can start learning how to create Java EE applications (we will move from Struts2/REST to pure JAX-RS); the security we adopt from Spring and extend with several features.

The live version of Meruvian Yama now runs at http://www.merv.id and http://www.cybers.id, and we hope more and more people will use it and contribute to make it better.

FYI: the difference between MervID and CybersID is a more complete profile and DISC Psychotest profiling; both are separate projects under the PAJAJE project (still hosted on SF.NET).

We have also created Yama-showcase as a playground for "access" to our Merv.ID, Facebook, and G+; it adopts the AdminLTE Bootstrap UI.

There is also an Android client that consumes the Yama News showcase, called the MiDas Project Showcase; you can download it from Google Play or go directly to its URL: https://play.google.com/store/apps/details?id=org.meruvian.midas.showcase

The source code of the MiDas Project is at https://github.com/meruvian/midas-droid

So, if you want to create an authentication server, we are glad to help, and we welcome any feedback; just contact me at frans @ meruvian.com


Welcome to Docker

By Jeff Nickoloff, Docker in Action

Save 39% on Docker in Action with discount code dockerjn14 at manning.com.

If you are anything like me, you prefer to do only exactly what is necessary to accomplish an unpleasant or mundane task. It is likely that you would prefer tools that are simple to use to great effect over those that are complex or time consuming. If I’m right, then I think you’ll be interested to learn about Docker. Launched in March of 2013, Docker is still a new technology. Most technologists have yet to work with it, and fewer have integrated it with their daily activities. But that is changing, and you might want to be next.

If you are like a lot of people, you might have already heard some things about Docker, but are not sure it is right for you or your organization. Maybe you get the impression that this is a fad technology. Before you can make that claim, I’d recommend that you try it. You’re likely to be as surprised as I was.

At the moment, Docker only works with Linux software but you can use Docker and run all of the examples in this book on Linux, OSX, and Windows thanks to a utility called Boot2Docker.

Figure 1 Boot2Docker lets you run Linux applications on OSX and Windows.

Suppose you like to try out new Linux software but are worried about running something malicious. Running that software with Docker is a great first step in protecting your computer because Docker helps even the most basic software users take advantage of powerful security tools.

If you are a system administrator, making Docker the cornerstone of your software management toolset will save you time and let you focus on high value activities because Docker minimizes the time that you will spend doing mundane tasks.

If you write software, distributing your software with Docker will make it easier for your users to install and run. Writing your software in a Docker wrapped development environment will save you time configuring or sharing that environment, because from the perspective of your software every environment is the same.

Suppose you own or manage large-scale systems or data centers. Creating build, test, and deployment pipelines is simplified using Docker because moving any software through such a pipeline is identical to moving any other software through.

What is Docker?

Docker works with your operating system to package, ship, and run software. You can think of Docker like a software logistics provider. It is currently available for Linux-based operating systems but that is changing fast. Either software authors or users can apply it with network applications like web servers, databases, and mail servers; terminal applications like text editors, compilers, network analysis tools, and scripts; and in some cases it is even used to run GUI applications like web browsers and productivity software. Docker will have new uses as operating systems grow to offer new features. Having help with software logistics is more important than ever because we depend on more software than ever. Docker is not a programming language, and it is not a framework for building software. Docker is a tool that helps solve common problems installing, removing, upgrading, distributing, trusting, and managing software.

Docker is open source, which means that anyone can contribute to it and it has benefited from a variety of perspectives. It is common for companies to sponsor the development of open source projects. In this case, Docker Inc is the primary sponsor. You can find out more about Docker Inc. at https://docker.com/company/.

What Problems?

Every week I read a few stories about difficulties installing, upgrading, removing, distributing, trusting, and managing software. Some are particularly horrific, describing wasted time, frustration, and service outages. I have had personal experiences where I tried to install software for up to eight hours before giving up and finding an alternative solution.

Software installation experiences usually fall into one of two categories. Either an installation program hides everything that it is doing to install a program on your computer, or the software comes with complicated instructions. In either case, installing software will require several changes to your computer. Worst-case scenarios happen when two programs cannot run on the same computer forcing a user to make tradeoffs.

Upgrading installed software introduces an opportunity for the same incompatibilities you encounter during installation. Some tools exist for resolving those conflicts, but they are often domain specific. Assumptions that software authors have to make about where users will install their work make software distribution just as challenging. While software authors want their work to reach the broadest possible audience, real world considerations like time and cost limit that audience.

People and institutions that suffer most from software problems deploy software to several computers. As the scale of the deployment increases, so does the general complexity of the software problems. Every piece of software and every computer introduced multiplies that complexity. Trust issues are the most difficult problems to solve. Even if you trust the source of your software, how can you trust it not to break under attack? Building secure computing environments is challenging and out of reach for most users.

How Does Docker Solve the Problems?

Using software is complex. Before installation you have to consider what operating system you're using, the resources the software requires, what other software is already installed, and what other software it depends on. You need to decide where it should be installed. Then you need to know how to install it. It’s surprising how drastically installation processes vary even today. The list of considerations is long and unforgiving. Installing software is, at best, inconsistent and over complicated.

Most computers have more than one application installed and running. And most applications have dependencies on other software. What happens when two or more applications you want to use do not play well together? Disaster. Things are only made more complicated when two or more applications share dependencies:

  1. What happens if one application needs an upgraded dependency but the other does not?
  2. What happens when you remove an application; is it really gone?
  3. Can you remove old dependencies?
  4. Do you remember all of the changes you had to make to install the software you want to remove?

The simple truth is that the more software you use, the more difficult it is to manage. Even if you manage to spend the time and energy to figure out installing and running applications, how confident can anyone be about their security? Open and closed source programs release security updates continually and just being aware of all of the issues is often unmanageable. The more software you run, the greater the risk that it is vulnerable to attack.

All of these issues can be solved with careful accounting, management of resources, and logistics. Those are mundane and unpleasant things to deal with. Your time would be better spent using the software that you are trying to install, or upgrade, or publish. The people that build Docker recognized that, and thanks to their hard work you can breeze through the solutions with minimal effort in almost no time at all.

It is possible that most of these issues seem acceptable today. Maybe they even feel trivial because you’re used to them. After reading how Docker makes these issues approachable, you may notice a shift in your opinion.


An introduction to technical debt from Re-Engineering Legacy Software by Chris Birchall

By Chris Birchall, Re-Engineering Legacy Software

Save 39% on Re-Engineering Legacy Software with discount code relegjn14 at manning.com.

Every developer is occasionally guilty of writing code knowing it’s not perfect but good enough for now. In fact, this is often the correct approach. As Voltaire wrote,

“Le mieux est l'ennemi du bien.” (Perfect is the enemy of good.)

In other words, it is often more useful and appropriate to ship something that works than to spend excessive amounts of time striving for a paragon of algorithmic excellence.

However, every time you add one of these “good enough” solutions to your project, you should plan to revisit the code and clean it up when you have more time to spend on it. Every temporary or hacky solution reduces the overall quality of the project and makes future work more difficult. If you let too many of them accumulate, eventually progress on the project will grind to a halt.

Debt is often used as a metaphor for this accumulation of quality issues. Implementing a quick-fix solution is analogous to taking out a loan, and at some point this loan must be paid back. Until you repay the loan by refactoring and cleaning up the code, you will be burdened with interest payments, i.e., a codebase that's more difficult to work with. If you take out too many loans without paying them back, eventually the interest payments will catch up with you and useful work will grind to a halt.

For example, imagine your company runs InstaHedgehog.com, a social network in which users can upload pictures of their pet hedgehogs and send messages to each other about hedgehog maintenance. The original developers did not have scalability in mind when they wrote the software, as they only expected to support a few thousand users. Specifically, the database in which users' messages are stored was designed to be easy to write queries against, rather than to achieve optimal performance.

At first, everything ran smoothly, but one day a celebrity hedgehog owner joined the site, and InstaHedgehog.com's popularity exploded! Within a few months, the site's userbase had grown from a few thousand users to almost a million. The DB, which wasn't designed for this kind of load, started to struggle and the site's performance suffered. The developers knew that they needed to work on improving scalability, but achieving a truly scalable system would involve major architectural changes including sharding the DB and perhaps even switching from the traditional relational DB to a NoSQL data store.

In the meantime, all these new users brought with them new feature requests. The team decided to focus initially on adding new features, whilst also implementing a few stop-gap measures to improve performance. This included adding a couple of DB indexes, introducing ad-hoc caching measures wherever possible, and throwing hardware at the problem by upgrading the DB server. Unfortunately, the new features vastly increased the complexity of the system, partially because their implementation involved working around the fundamental architectural problems with the DB. The caching systems also increased complexity, as anybody implementing a new feature now had to consider the effect on the various caches. They led to a variety of obscure bugs and memory leaks.

Fast forward a few years to the present day, and you are charged with maintaining this behemoth. The system is now so complex that it's pretty much impossible to add new features, and the over-complicated caching systems are still regularly leaking memory. You've given up on trying to fix that, opting instead to restart the servers once a day. And it goes without saying that the re-architecting of the DB was never done, as the system became complex enough to render it impossible.

The moral of the story is, of course, that if the original developers had only tackled their technical debt earlier, you wouldn't be in this mess. It's also interesting to note that debt begets more debt. Because the original technical debt (the inadequate DB architecture) was not paid off, the implementation of new features became excessively complex. This extra complexity is itself technical debt, as it makes the maintenance of those features more difficult. Finally, as an ironic twist, the presence of this new debt makes it more difficult to pay off the original debt.

Go: A practical introduction to slices + 39% savings

By Erik St. Martin

Slices are one of the many exciting features of Go that give it the same feeling as a dynamic language. They allow you to have a dynamically sized array, pass arrays by reference, and have multiple array-like types that are all supported by the same underlying array. All of this leads to solving problems in a much more efficient way, and creating less garbage for the garbage collector to clean up.

So what is a slice? A slice is like a window into an array, and like windows on a house, they all give you a view into the same house, but depending on which window you’re looking through, you may be looking at a different part of the house.

If you come from a language like Perl working with slices will be very familiar, with the exception that in Go slices share the same underlying array.

Let's take a look at how a slice looks. Assuming that we have an array of four users (Tim, Joe, Bob, and Tom), we can create two slices -- one of the first two users, the other of the last two users. Figure 1 demonstrates what it might look like:

Figure 1

The two slices work just like normal arrays, except that their indexes are relative to where they exist in the current slice, not the original array, and any change made to an element through either slice is seen by the other, because the data is actually stored in an array shared by both slices.

We're going to continue on with our document versioning example, and see how we can use slices to combat our second problem from the arrays section: checking whether the version we're adding surpasses the upper bounds of the version array. We need to account for an unknown number of versions; our example from the arrays section was limited to the size of the array we defined in the code. First, let's take a look at the various ways that we can create slices.

Slicing an array or another slice

We can slice an array or another slice by using the [] syntax; if you worked with slices in Perl then this will be very familiar to you. Slicing an array or another slice is a nice convenience when you only want to work with a subset of a larger data set. First we'll look at an example, then we'll walk through how this works.

users := [4]string{"Tim", "Joe", "Bob", "Tom"} 
top := users[0:2] // [Tim Joe]

The first number inside the slice brackets represents the starting index; the second represents the ending index. The ending index is exclusive, so the element at that index is not included.

// Create a new variable top, containing the first 2 users 
top := users[0:2] // [Tim Joe]

You can omit either index, and the starting or ending index will be determined for you. When you omit the starting index, 0 is assumed.

This example grabs from the start of the array/slice up to, but not including, index 2.

// Create a new variable top, containing the first 2 users
top := users[:2] // [Tim Joe]

You can also omit the ending index, and len(array) is assumed. This example grabs from the element at index 2 to the end of the array or slice.

// Create a new variable bottom, containing everything from index 2 till the end
bottom := users[2:] // [Bob Tom]

Creating Slice Literals

Another way we can create a slice is by creating a slice literal. The nice thing about slice literals is that they allow you to create the slice and populate it on the same line of code. They are defined nearly identically to array literals; the difference is that we don't specify a size or use the ellipsis (...).

users := []string{"Tim", "Joe", "Bob", "Tom"}

We can also declare a slice as we would any other variable. Such a slice is nil and has 0 length until we initialize it, for example with make():

var s []int

Defining a slice with make()

One of the most common ways you might see a slice created is with the make() function, which takes up to three arguments:

  1. Type that is being made
  2. Initial size of the slice
  3. Initial capacity of the slice

The first argument to make() is the concrete type. The second argument is the length of the slice. The third argument is the capacity of the underlying array, which is useful if you have an idea of the size this slice could grow to, in order to prevent new allocations and copying of the array when elements are appended, as you'll see in the performance considerations section.

Getting Down to Business with Slices

We've learned what slices are, how to create them, and how to slice existing arrays and slices. So let's get back to our example and how we can utilize them to have a dynamic number of versions. Some of the commands we haven't talked about yet, but we'll go over them as we walk through how our new solution works.

This rewritten example application shows how you could use a slice to manage a list of versions of a document to manage UNDO functionality for a text editor type application.

In this example, you'll see the append() function for the first time, which shows you how easy it is to add items to an existing slice.

  1. Allocate a slice containing one element
  2. Retrieve the string at the last index, as it's always the most recent value
  3. Append the new version to the slice
  4. Reslice to remove the last version created, reverting to previous version as latest

The first change we made is that we now use a slice to hold our versions, so we aren't restricted to a maximum number of versions defined at compile time. We initialize that slice in main() using make(), and the rest of main() remains the same.

The Changed() method has been modified to make use of append(). Let's talk a bit more about how append works.


We can dynamically adjust the size of a slice by using the append() function. The example above adds new versions to the slice by appending items to the existing slice with this line of code:

versions = append(versions, newVersion)

If the underlying array is large enough, it places the value at the next index in the underlying array, and then returns a new slice object that contains all the items in the current slice, as well as the new item. If the underlying array is not large enough to support this new item, a new array with a larger size will be created, and the items from the old array will be copied, and the append will take place the same as above.

Remember this does not modify the original slice. It creates a new slice that contains this new element. So you need to assign it to a variable. In this case, we're reusing the same variable name, and assigning the newly created slice to it:

versions = append(versions, newVersion)

A little gotcha to be cognizant of is that if you have references to multiple slices of the same underlying array, and you append to a slice referencing an earlier segment of the array, it will grow into the space that slices containing later segments of the array have references to.

Let’s make this point more clear with an example. In the following code we'll make a slice of four strings. Then we'll create slices of that original slice, one with just the first element, and one with the last three elements. Now we'll append data to one of the slices and see what happens:

  1. Create a slice of four strings
  2. Take a slice of the users slice, with the first element
  3. Take a slice of the users slice with the last three elements
  4. Append an element to one of the slices
  5. The second slice references different data after the append

The reason for this is that, just as when the array has spare room, append() fills the next element in the array and then returns a new slice pointing one element further into the array. It has no knowledge of what is or isn't referencing that later section. Figure 2 shows what the memory looks like in this scenario, before and after the append.

Slice b is pointing to the users array from index 1 through index 3. Slice a is pointing to the users array containing only index 0. So when we append to slice a it grows by one, and into index 1 of the users array which slice b already sees at b[0], so the change is also reflected there.

Figure 2

Also remember, just as we showed with pointers to arrays: although a slice is a reference type, whenever you pass it to a function or use append(), you are getting a new slice that points to the same underlying array with the same starting and ending positions.

Checking the capacity of a slice

Sometimes it's beneficial to see how much room a slice has left to grow before a new underlying array gets allocated. When performance matters, you may want to make your memory allocations in larger blocks, rather than one at a time. The cap() function can be used for this.

s := make([]int, 0, 10) 
fmt.Println(cap(s)) // Outputs: 10

An important thing to note is that the capacity shows the size of the current slice, as well as the room left from the end of the slice to the end of the array. The capacity does not include any elements in the underlying array that are located before the first element in our slice. This can be demonstrated with the following code.

users := []string{"Tim", "Joe", "Bob", "Tom"}
// Create a slice of users, starting at index 0 and ending at index 2 (exclusive)
top := users[0:2] // [Tim Joe]
// Create a slice of users, starting at index 2 and ending at index 3 (exclusive)
bottom := users[2:3] // [Bob]
fmt.Println(cap(top))    // Outputs: 4
fmt.Println(cap(bottom)) // Outputs: 2

Once you have created a slice, you can never gain access to earlier elements in the array, unless you have access to a slice that contains them, or the original array or slice.

As with our section on array internals, we will also go over the internals of how slices work and their performance considerations. It's highly recommended that you look over the next two sections; they help prevent common mistakes, bugs, and performance issues that can creep up.

Slices in Action

Now let's make some changes to the Undo() function. To refresh your memory, we changed our original example to use slices with this line of code:

versions = versions[:len(versions)-1]

Here we simply reassign versions to a slice of itself. We slice the current versions slice from the first element up to len(versions)-1, essentially creating a new slice that contains all but the last element of the original.

Because this is a slice, the underlying array remains unchanged just like our implementation with arrays, but instead of using a variable to hold the last index, we now have a slice that we can work with just like an array, and we only see the versions that have not been undone. Figure 3 shows us what our versions slice looks like now.

Figure 3

We demonstrate here that the array grows at the same rate as the slice, but in actuality the Go runtime may increase the underlying array by larger amounts for efficiency, to minimize the number of allocations and garbage collections that need to occur. The important thing to notice is that we have a new slice that is a window into that array (or a new one, in the case that we run out of capacity), and it contains only the elements we've added.

Figure 4 shows what happens to our slice once we call Undo(). Notice that the underlying array still has the additional value in it, but our slice is only referencing up till the prior item.

Figure 4

As you can see in the bottom image of Figure 4, the Undo() function reslices the underlying array to include everything but the latest version.

Slice internals

To help understand slices, it helps to think of one as a struct holding a pointer to an array, the starting element, the length, and the capacity. If we were to append here, a copy of this struct would be made, and its len would be incremented.

type Slice struct {
    array unsafe.Pointer // pointer to the underlying array
    start int            // starting element
    len   int            // length of the slice
    cap   int            // capacity
}
Slices behave much like pointers: the slice header itself is passed by value, but it is a reference type. The slice is copied, yet the copy's internal pointer still refers to the original array. That means we cannot assign to the slice parameter inside a function and have the change seen by the caller.

The following example demonstrates this more succinctly. When you pass a slice to a function, make a change inside the function, and then reference the same slice later, the change doesn't show:

func addUser(users []string, user string) {
    users = append(users, user)
    fmt.Println(users) // Outputs: [Joe Bob Tom Chris]
}

func main() {
    // Declare and initialize users slice
    users := []string{"Joe", "Bob", "Tom"}
    addUser(users, "Chris")
    fmt.Println(users) // Outputs: [Joe Bob Tom]
}

Notice that the changes made inside the addUser() function were not reflected in the slice within the main() function. This is because the two slices are separate objects, even though they started out pointing at the same array with the same length and capacity. The users variable inside addUser() is a separate slice header in memory, so append() just assigns the new slice to the variable local to the function. One way you might actually see the change reflected is if we append to a slice whose backing array has enough capacity for the new element:

func addUser(users []string, user string) {
    users = append(users, user)
    fmt.Println(users) // Outputs: [Tim Joe Bob Chris]
}

func main() {
    users := []string{"Tim", "Joe", "Bob", "Tom"}
    // Here we pass a slice of the first 3 elements, so the call to append()
    // will overwrite the 4th element.
    addUser(users[:3], "Chris")
    fmt.Println(users) // Outputs: [Tim Joe Bob Chris]
}

Because we retain a reference to the array index that append() filled in, we are able to see the change: our original users slice points at the same array elements as the slice returned by the append() call. You cannot rely on this behavior, though. If the slice does not have sufficient capacity, a new underlying array is created, and the old slice in main() would still reference the original array.
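The capacity rules above can be verified directly. This sketch (names invented) appends within capacity and observes the shared backing array, then exceeds capacity and observes the split:

```go
package main

import "fmt"

func main() {
	// A slice of length 3 over a backing array with capacity 4.
	users := make([]string, 3, 4)
	copy(users, []string{"Tim", "Joe", "Bob"})

	s := append(users, "Chris") // fits within capacity: shares the array
	fmt.Println(users[:4][3])   // Outputs: Chris — visible through the old array

	s2 := append(s, "Pat") // exceeds capacity: a new array is allocated
	s2[0] = "CHANGED"
	fmt.Println(s[0]) // Outputs: Tim — s still points at the old array
}
```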

Performance considerations

Knowing that slices contain pointers to an underlying array, we should consider what happens when we slice a very large slice or array, say the contents of a file or a large request stream. Even if we keep only a small section, we maintain a reference to the potentially large underlying array until all references to slices of it are gone, preventing the garbage collector from freeing that memory.

If you need to return or pass along a small section of a large array, we recommend using copy(). This will allow you to create a new slice containing only the desired elements, without a reference to the original backing array.

users := [4]string{"Tim", "Joe", "Bob", "Tom"}
// Create a new slice with 2 elements; it must have a length of 2, not just
// a capacity of 2.
top := make([]string, 2)
// Copy the data from the first 2 positions of users into top, until len(top)
// has been filled.
copy(top, users[0:2])
fmt.Println(top) // Outputs: [Tim Joe]
// Let's output the capacity to make sure we only have 2 elements and not 4.
fmt.Println(cap(top)) // Outputs: 2

Now that we have a new array and slice holding our two users, we are no longer holding a reference to the larger array. When the method returns, the garbage collector is free to clean up that memory.

Another important performance consideration is to create slices using make() with a capacity that can support the growth of your data. For example, if you know your slice is going to grow to 1,000 elements, you can allocate an array of that size up front, which prevents new arrays from being allocated and memory copied each time the slice needs to grow. You can keep one variable referencing the original array or slice and another referencing the currently used section, shrinking or growing the used slice as necessary, preventing allocations and garbage collection of underlying arrays as your slice grows.
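As a quick sketch of that advice (the element count is invented), preallocating with make() means the backing array never has to be reallocated while the slice grows:

```go
package main

import "fmt"

func main() {
	// Preallocate the full expected capacity up front.
	data := make([]int, 0, 1000)
	before := cap(data)
	for i := 0; i < 1000; i++ {
		data = append(data, i) // grows len only; never reallocates
	}
	// The capacity never changed, so no reallocation or copying occurred.
	fmt.Println(len(data), cap(data) == before) // Outputs: 1000 true
}
```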

To learn more about the Go Language from Manning Publications visit Go in Action by Brian Ketelsen, Erik St. Martin, and William Kennedy. Save 39% on Go in Action with discount code gojn14.


Google's Big Question: What's Different Now?

In Eric Schmidt's presentation "How Google Works", he asks and answers the question "What's Different Now?" for businesses in the 21st century. And the answers he gives are:

1. Cloud computing puts a supercomputer in your pocket.
2. Mobile devices mean anyone can reach anyone, anywhere, anytime.
3. All the world's information and media is online.

It's worth asking how this applies to you and me, how we work and what we do; I'll try to give some answers from a personal perspective. As I'm a Java performance guy, I'll consider it from that point of view too, along with considerations for every IT professional.

1. "Cloud computing puts a supercomputer in your pocket"

Starting with the first one: sadly, I (and most of you) will NOT have a cloud virtual machine on standby for random tasks. Not right now, anyway. Though when you combine it with the second observation above, the implication is that ultimately we'll have exactly that: an always-on online personal supercomputer, with our personal devices becoming the interface to it. Literally the interface to your personal supercomputer, in your pocket.

But for the next five years, that's not most people's reality. For most people, personal cloud computing means access to a lot of storage online; so much that it's the bandwidth between your device and that storage, rather than the storage available, that restricts how much you can use. (As far as non-storage services are concerned, on a personal basis you don't really care whether a service is in the cloud or not, so there's not much direct benefit to you of cloud computing other than storage.)

On a professional IT basis, the cloud means that you have access to resources in a far more elastic way than you used to. But low latency doesn't mix well with the cloud (as opposed to high throughput which works brilliantly in the cloud), and if your resource requirements are relatively constant then dedicated servers are more cost-effective; so you need to consider carefully which services you run in the cloud. Though you should probably have some sort of cloud exposure on your CV.

With regard to Java performance, the consideration starts with the same issues: low latency, and elastic vs constant resource requirements. But you have far more to consider, such as multi-tenancy vs isolation; how to monitor elastic resources consistently; time drift across virtual machines; handling instance spin-up and initialization without impacting other services or request latencies; etc. The summary is that for Java performance, the cloud is a new environment with its own very specific challenges that need independent consideration for testing, monitoring and analysis. You can't just take the techniques you use on a server, browser, client application or mobile app and transfer them to cloud services; you actually have to use different techniques. It's a completely new environment class to add to those four: it has a server-like environment, but resource competition similar to a browser, and mobile-app-style unpredictability of uptime.

2. "Mobile devices mean anyone can reach anyone, anywhere, anytime."

Would you turn your phone off for a full day to test how much you need it? We now use our phones almost as cybernetic devices that are part of us. Why would you turn that off? You wouldn't. So the statement above is pretty much true: you are reachable anywhere, anytime. The only thing preventing anyone or anything from actually using that reach is security by obscurity: they don't all know your number. So on a personal basis, be careful who you give your number to; your phone stays relatively spam-free if you control that.

On a professional IT basis, the mobile device is the user interface that will grow and grow. If your application doesn't take into account telecom systems and mobile devices, you will start to suffer; possibly not right away, but definitely within 5 years. Telecom enabling an application is fantastically straightforward using a company like Nexmo, where I work, and as we're intending to support all mobile device communication channels as they evolve, your telecom capability gets to be future-proofed. The mobile device is more and more involved in the identification process - right now the phone number is the ultimate user id (which is why you're getting verification by SMS); at Nexmo we provide additional capabilities to easily let you perform two-factor authentication, verification and send one-time-passwords to initiate new users, reset passwords, verify transactions and similar tasks.

On the Java performance side, optimizing for mobile devices is well understood, just follow the tips I extract on a regular basis in my monthly newsletter, eg
Matthew Carver's "Six Ways You're Using Responsive Design Wrong",
Tim Hinds's "Beginner's Guide to Mobile Performance Testing",
Caroline de Lacvivier's "FAQ: Testing mobile app performance",
Steve Weisfeldt's "Best Practices for Load Testing Mobile Applications".

3. "All the world's information and media is online"

You already know this and use it. From the IT perspective, the important thing is that you integrate or at least connect to anything that's relevant. An agent that processes the world's information for your particular application is an inevitable component, you're ahead of the curve if you have one already (well done you), but in five years you'll be behind if you don't.

From the Java performance perspective, integrating with multiple external sources is a massive headache that needs to be handled with care. You have to assume every type of network connection failure will happen: the usual non-connectivity, of course; but also the more obscure connection that doesn't do anything (it's not really connected but doesn't tell your socket); connections that trickle in just enough bytes to prevent any timeout, but so slowly and with so many retries that your read goes on for hours; connections returning data in an unexpected format; and incorrect data (you must corroborate external data if you're relying on it to make a decision). The last two aren't really performance issues, but they illustrate how flaky the world's information is: you can barely work with it, but increasingly you can't work without it.

I think it's clear that Eric Schmidt's three answers are relevant considerations for your future plans; they're worth keeping in mind.

D3: Making a Word Cloud an Effective Graphical Object

By Elijah Meeks for D3 in Action

One of the most popular information visualization charts in D3 is also one of the most maligned: the word cloud. Also known as a tag cloud, the word cloud uses text and text size to represent the importance or frequency of words. Figure 1 shows a thumbnail gallery of 15 word clouds derived from text in a species biodiversity database. Oftentimes, word clouds rotate the words to set them at right angles or jumble them at random angles to improve the appearance of the graphics.

The word clouds in Figure 1 were created with the popular Java applet Wordle, which provides an easy user interface and a few aesthetic customization choices. Because Wordle lets anyone create visually arresting graphics simply by dropping text onto a page, it flooded the Internet with word clouds. This caused much consternation among data visualization experts who think word clouds are evil because they embed no analysis in the visualization and only highlight superficial data such as the quantity of words in a blog post.

But word clouds are not evil. First of all, they’re popular with audiences. But more than that, words are remarkably effective graphical objects; if you can identify a numerical attribute that indicates the significance of a word, then scaling the size of a word in a word cloud will relay that significance to your reader.

Figure 1: A word cloud or tag cloud uses the size of a word to indicate its importance or frequency in a text, allowing for a visual summary of text. These word clouds were created by the popular online word cloud generator Wordle.

So let’s start by assuming we have the right kind of data for a word cloud. Fortunately, I do: the top twenty words used in this chapter, with the number of times each word appears.

Listing 1 worddata.csv


To create a word cloud with D3, we have to use a layout created by Jason Davies that isn’t in the core library, and implement an algorithm written by Jonathan Feinberg (http://static.mrfeinberg.com/bv_ch03.pdf). This layout, d3.layout.cloud(), is available on GitHub at https://github.com/jasondavies/d3-cloud. The layout requires that you define what attribute will determine word size and what size you want the word cloud to lay out for.

Unlike most other layouts, cloud() fires a custom event “end” that indicates it’s done calculating the most efficient use of space to generate the word cloud, to which it passes the processed dataset with the position, rotation and size of the words. Because of this, we can run the cloud layout without ever referring to it again, and don’t even need to assign it to a variable. Of course, if you plan to reuse it and adjust the settings, you would assign it to a variable just like you would any other layout.

     wordScale=d3.scale.linear().domain([0,75]).range([10,160]); #a

     d3.layout.cloud()
       .size([500, 500])
       .words(data) #b
       .fontSize(function(d) { return wordScale(d.frequency); }) #c
       .on("end", draw)
       .start(); #d

     function draw(words) { #e
       var wordG = d3.select("svg").append("g").attr("id", "wordCloudG").attr("transform","translate(250,250)");

       wordG.selectAll("text")
         .data(words)
         .enter()
         .append("text")
         .style("font-size", function(d) { return d.size + "px"; })
         .style("opacity", .75)
         .attr("text-anchor", "middle")
         .attr("transform", function(d) {
           return "translate(" + [d.x, d.y] + ")rotate(" + d.rotate + ")";
         }) #f
         .text(function(d) { return d.text; });
     }

#a A scale for the font rather than using raw values
#b You assign data to the cloud layout using .words()
#c Setting the size of each word using our scale
#d The cloud layout needs to be initialized, when it’s done it will fire “end” and run whatever function “end” is associated with
#e We’ve assigned draw() to “end”, which automatically passes the processed dataset as the words variable
#f Translation and rotation are calculated by the cloud layout

The result of this code is an SVG text element for each word, placed and rotated according to the layout. None of our words are rotated, so we get the rather staid word cloud seen in Figure 2.

Defining rotation, though, is simple enough, and only requires that you set some rotation value in the cloud layout’s .rotate() function:

     randomRotate=d3.scale.linear().domain([0,1]).range([-20,20]); #a

     d3.layout.cloud()
       .size([500, 500])
       .words(data)
       .rotate(function() { return randomRotate(Math.random()); }) #b
       .fontSize(function(d) { return wordScale(d.frequency); })
       .on("end", draw)
       .start();

#a This scale will take a random number between 0 and 1 and return an angle between -20 degrees and 20 degrees
#b Set the rotation for each word

At this point, you have your traditional word cloud, and you can tweak the settings and colors to create anything you've seen on Wordle. In Figure 3, you can see how we jostled the words by rotating each one to a random angle.

Now that we have our word cloud, let's take a look at why word clouds get such a bad reputation. We've taken an interesting dataset, the most common words in this chapter, and, other than sizing them by frequency, done little more than place them on screen and jostle them a bit. Remember that there are different channels for expressing data visually, and in this case the best channels we have, besides size, are color and rotation.

With that in mind, let’s imagine that we have a keyword list for this book, and that each of these words is in a glossary in the back of the book. We’ll place those keywords in an array and use them to highlight the words in our word cloud that appear in the glossary. We’ll also rotate shorter words 90 degrees and leave the longer words unrotated so that they’ll be easier to read.

Listing 2 Word cloud layout with key word highlighting
     var keywords = ["layout", "zoom", "circle", "style", "append", "attr"]; #a

     d3.layout.cloud()
       .size([500, 500])
       .words(data)
       .rotate(function(d) { return d.text.length > 5 ? 0 : 90; }) #b
       .fontSize(function(d) { return wordScale(d.frequency); })
       .on("end", draw)
       .start();

     function draw(words) {
       var wordG = d3.select("svg").append("g").attr("id", "wordCloudG").attr("transform","translate(250,250)");

       wordG.selectAll("text")
         .data(words)
         .enter()
         .append("text")
         .style("font-size", function(d) { return d.size + "px"; })
         .style("fill", function(d) { return (keywords.indexOf(d.text) > -1 ? "red" : "black"); }) #c
         .style("opacity", .75)
         .attr("text-anchor", "middle")
         .attr("transform", function(d) {
           return "translate(" + [d.x, d.y] + ") rotate(" + d.rotate + ")";
         })
         .text(function(d) { return d.text; });
     }

#a Our array of keywords
#b The rotate function rotates every word of five or fewer characters by 90 degrees and leaves longer words unrotated
#c If the word appears in the keyword list, color it red, otherwise color it black

The result seen in Figure 4 is fundamentally the same word cloud, but instead of using color and rotation for aesthetics, we used them to encode information in the dataset. There are more controls over the format of your word cloud, which you can see in the layout's documentation at https://www.jasondavies.com/wordcloud/about/, including selecting fonts and padding. Layouts like the word cloud aren't suitable for as wide a variety of data as some other layouts, but because they're so easy to deploy and customize, you can combine them with other charts to represent the multiple facets of your data.

This article is based on D3 in Action by Elijah Meeks.

Save 40% on D3 in Action with discount code d3jn14 at manning.com.


What Probabilistic Programming is and How to Use it

By Avi Pfeffer for Practical Probabilistic Programming
Save 40% on Practical Probabilistic Programming with code pppjn at manning.com.

Probabilistic programming is a way to create systems that help us make decisions in the face of uncertainty. Probabilistic reasoning combines our knowledge of a situation with the laws of probability to determine those unobserved factors that are critical to the decision. Until recently, probabilistic reasoning systems have been limited in scope, and have been hard to apply to many real world situations. Probabilistic programming is a new approach that makes probabilistic reasoning systems easier to build and more widely applicable.

To explain probabilistic programming, we’ll start out looking at decision making under uncertainty and the judgment calls involved. Then we’ll see how probabilistic reasoning can help make these decisions. We’ll look at three specific kinds of reasoning that probabilistic reasoning systems can do. Then we’ll be able to understand probabilistic programming and how it can be used to build probabilistic reasoning systems through the power of programming languages.

How Do We Make Judgment Calls?

In the real world, there are rarely clear yes or no answers to the questions we care about. If we're launching a new product, we want to know if it will sell well. We might think it will be successful, because we believe it is well designed and our market research indicates there is a need for it, but we can't be sure. Maybe our competitor will come out with an even better product, or maybe it has some fatal flaw that will turn off the market, or maybe the economy will take a sudden turn for the worse. If we rely on being 100 percent sure, we will not be able to decide whether or not to launch the product (Figure 1).

Figure 1 - Last year everyone loved my product, what will happen next year?

The language of probability can help make decisions like these. When launching a product, we can use prior experience with similar products to estimate the probability the product will be successful. We can then use this probability to help decide whether to go ahead in launching the product. We can provide the probabilities of different outcomes to make more informed decisions.

Okay, so probabilistic thinking can help us make hard decisions and judgment calls. But how do we do that? Here’s the general principle:

FACT: A judgment call is based on knowledge + logic

We have some knowledge of the problem we’re interested in. For example, we know a lot about our own product, and we might have done some market research to find out what customers want. We also might have some intelligence about our competitors and access to economic predictions. Meanwhile, the logic helps us get answers to our questions using the knowledge.

So, we have to have a way of specifying the knowledge, and we have to have logic for getting answers to our questions using the knowledge. Probabilistic programming is all about providing ways to specify the knowledge and logic to answer questions. Before I describe what a probabilistic programming system is, I’ll describe the more general category of probabilistic reasoning system, which provides the basic means to specify knowledge and provide logic.

Probabilistic Reasoning Systems Help Make Decisions

Probabilistic reasoning is an approach that uses a model of your domain to make decisions under uncertainty. Let’s take an example from the world of soccer. Suppose the statistics show that 9 percent of corner kicks result in a goal. You’re tasked with predicting the outcome of a particular corner kick. The attacking team’s center forward is 6’ 4’’ and known for her heading ability. The defending team’s regular goalkeeper was just carted off on a stretcher and has been replaced by a substitute playing her first game. Besides that, there’s a howling wind that makes it difficult to control long kicks. So how do you figure out the probability?

Figure 2 shows how you would use a probabilistic reasoning system to find the answer. You would encode your knowledge about corner kicks and all the relevant factors in a corner kick model. You would then supply evidence about this particular corner kick, namely that the center forward is tall, the goalie is inexperienced, and the wind is strong. You tell the system that you want to know whether a goal will be scored. The inference algorithm returns the answer a goal will be scored with probability 20 percent.

Figure 2: How a probabilistic reasoning system predicts the outcome of a corner kick


General knowledge: what you know to hold true of your domain in general terms, without considering the details of a particular situation

Probabilistic model: an encoding of general knowledge about a domain in quantitative, probabilistic terms

Evidence: specific information you have about a particular situation

Query: a property of the situation you want to know

Inference: the process of using a probabilistic model to answer a query given evidence

In probabilistic reasoning, you create a model that captures all the relevant general knowledge of your domain in quantitative, probabilistic terms. In our example, the model might be a description of a corner kick situation and all the relevant aspects of players and conditions that affect the outcome. Then, for a particular situation, you apply the model to any specific information you have to draw conclusions. This specific information is called the evidence. In this example, the evidence is that the center forward is tall, the goalie is inexperienced, and the wind is strong. The conclusions you draw can help you make decisions, for example, whether you should get a different goalie for the next game. The conclusions themselves are framed probabilistically, like the probability of different skill levels of the goalie.

The relationship between the model, the information you provide, and the answers to queries, is well defined mathematically by the laws of probability. The process of using the model to answer queries based on the evidence is called probabilistic inference or simply inference. Fortunately, computer algorithms have been developed that do the math for you and make all the necessary calculations automatically. These algorithms are called inference algorithms.
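To make the pipeline concrete, here is a deliberately naive sketch in Go of what "model plus evidence gives an answer" might boil down to for the corner kick. The function name, the evidence variables, and every probability below are invented for illustration; a real probabilistic reasoning system derives the answer from a full model via the laws of probability, not hand-tuned adjustments like these.

```go
package main

import "fmt"

// pGoal is a toy stand-in for the corner-kick model: start from the
// base rate and nudge it for each piece of evidence. All numbers are
// invented for illustration only.
func pGoal(tallForward, rookieGoalie, strongWind bool) float64 {
	p := 0.09 // base rate: 9 percent of corner kicks result in a goal
	if tallForward {
		p += 0.07 // strong header threat
	}
	if rookieGoalie {
		p += 0.08 // inexperienced keeper
	}
	if strongWind {
		p -= 0.04 // harder to control long kicks
	}
	return p
}

func main() {
	// Evidence: tall center forward, rookie goalie, howling wind.
	fmt.Printf("P(goal) = %.2f\n", pGoal(true, true, true))
}
```

Run with all three pieces of evidence, this toy lands near the 20 percent figure in the example, but only because the numbers were chosen to; the point is the shape of the computation, not the values.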

Figure 3 summarizes what we’ve just learned.

Figure 3: The basic components of a probabilistic reasoning system

So those, in a nutshell, are the constituents of a probabilistic reasoning system and how you interact with one. But what can you do with such a system? How does it help to make decisions? The next section describes three kinds of reasoning that can be performed by a probabilistic reasoning system.

Probabilistic Reasoning Systems Can Reason In Three Ways

Probabilistic reasoning systems are very flexible. They can answer queries about any aspect of your situation given evidence about any other aspect. In practice, there are three kinds of reasoning that probabilistic reasoning systems do.

1. Predict future events. We’ve already seen this in Figure 2, where we predict whether a goal will be scored based on the current situation. Your evidence will typically consist of information about the current situation, such as the height of the center forward, the experience of the goalie, and the strength of the wind.

2. Infer the cause of events. Fast forward ten seconds. The tall center forward just scored a goal with a header that squirted under the body of the goalie. What do you think of this rookie goalkeeper, given this evidence? Can you conclude that she is poorly skilled? Figure 4 shows how you would use a probabilistic reasoning system to answer these questions. The model is the same corner kick model you used before to predict whether a goal would be scored. (This is a useful property of probabilistic reasoning: the same model can be used both to predict a future result and to infer what caused that result after the fact.) The evidence here is the same as before, together with the fact that a goal was scored. The query is the quality of the goalie, and the answer provides the probability of various qualities.

Figure 4: By altering the query and evidence, the system can now infer why a goal was scored

3. Learn from past events to better predict future events. Now fast forward another ten minutes. The same team has won another corner kick. Everything is similar to before in this new situation—tall center forward, inexperienced goalie—but now the wind has died down. Using probabilistic reasoning, you can use what happened in the previous kick to help you predict what will happen on the next kick. Figure 5 shows how you can do this. The evidence includes all evidence from last time (making a note that it was from last time), as well as the new information about the current situation. In answering the query about whether a goal will be scored this time, the inference algorithm first infers properties of the situation that led to a goal being scored the first time, such as the quality of the center forward and goalie. It then uses these updated properties to make a prediction about the new situation.

Figure 5: By taking into account evidence from the outcome of the last corner kick, the probabilistic reasoning system can produce a better prediction of the next corner kick.

If we think about this last kind of reasoning, we can see that this is a kind of machine learning. The system is learning from past events to better predict future events. In our example, we just learned from a single past event, but in general, we might have many past events, like a whole season’s worth of soccer games, to learn from.


Like any machine learning system, a probabilistic reasoning system will be more accurate the more data you give it. The quality of the predictions depends on two things: the degree to which the original model accurately reflects real-world situations, and the amount of data you provide. In general, the more data you provide, the less important the original model is. For example, if you're learning from an entire soccer season, you should be able to learn the factors that contribute to a corner kick quite accurately. If you only have one game, you will need to start out with a good idea of the factors to be able to make accurate predictions about that game. In general, probabilistic reasoning systems will make good use of the given model and available data to make as accurate a prediction as possible.

All of these types of queries can help make decisions, on many levels.

  • We can decide whether to substitute a defender for an attacker based on the probability a goal will be scored with or without the extra defender.
  • We can decide how much to offer the goalie in her next contract negotiation based on our assessment of her skill.
  • We can decide whether to use the same goalie in the next game by using what we have learned about the goalie to help predict the outcome of the next game.

So now we know what probabilistic reasoning is. What then, is probabilistic programming?

Probabilistic Programming Systems: Probabilistic Reasoning Systems Expressed in a Programming Language

Every probabilistic reasoning system uses a representation language to express its probabilistic models. There are a lot of representation languages out there. You may have heard of some of them, such as Bayesian networks (also known as belief networks) and hidden Markov models. The representation language controls what models can be handled by the system and what they look like. The set of models that can be represented by a language is called the expressive power of the language. For practical applications, we’d like to have as large an expressive power as possible.

A probabilistic programming system is, very simply, a probabilistic reasoning system in which the representation language is a programming language. When I say programming language, I mean that it has all the features you typically expect in a programming language, like variables, a rich variety of data types, control flow, functions, and so on. As we’ll come to see, probabilistic programming languages are able to express an extremely wide variety of probabilistic models and go far beyond most traditional probabilistic reasoning frameworks. In other words, probabilistic programming languages have very large expressive power.

Figure 6 illustrates the relationship between probabilistic programming systems and probabilistic reasoning systems in general. The figure is based on Figure 3 and the annotations in red are taken exactly from that figure. The annotations in blue show what changes in a probabilistic programming system. The main change is that models are expressed as programs in a programming language rather than as a mathematical construct like a Bayesian network. As a result of this change, evidence, queries, and answers all apply to variables in the program. So, evidence might specify particular values for program variables, queries ask for the values of program variables, and answers are probabilities of different values of the query variables. In addition, a probabilistic programming system typically comes with a suite of inference algorithms. These algorithms apply to programs written in the language.

Figure 6: A probabilistic programming system is a probabilistic reasoning system that uses a programming language to represent probabilistic models

Although there are many kinds of probabilistic programming systems, the book on which this article is based, Practical Probabilistic Programming, focuses on functional, Turing-complete systems. Functional means that they are based on functional programming, but don’t let that scare you—you don’t need to know concepts like lambda functions to use functional probabilistic programming systems.

All this means is that functional programming provides the theoretical foundation behind these languages that lets them represent probabilistic models. Meanwhile, Turing-complete is jargon for a programming language that can encode any computation that can be done on a digital computer. In other words, if something can be done on a digital computer, it can be done with any Turing-complete language. Most of the programming languages you are familiar with, such as C, Java, and Python, are Turing-complete. Since probabilistic programming languages are built on Turing-complete programming languages, they are extremely flexible in the types of models that can be built. The book is available at manning.com.
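To see why this flexibility matters, here is an illustrative Python sketch (not from the book) of a model that uses unbounded looping: the number of random choices made is itself random, so there is no fixed-size structure like a Bayesian network that encodes it directly, yet as a program it is three lines of code.

```python
import random

def flips_until_heads(p=0.5):
    # Keep flipping a biased coin until it comes up heads.
    # The loop may run any number of times, so the model makes an
    # unbounded number of random choices -- easy to write as a program,
    # impossible to draw as a fixed, finite Bayesian network.
    n = 1
    while random.random() >= p:    # tails: flip again
        n += 1
    return n

random.seed(1)
samples = [flips_until_heads() for _ in range(100_000)]
print(sum(samples) / len(samples))   # ≈ 2.0, the mean of a geometric(0.5)
```

Control flow, recursion, and data structures all carry over the same way, which is exactly the expressive power the article is pointing at.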


Grokking Algorithms by Aditya Y. Bhargava - MEAP update + 40% savings

Grokking Algorithms is 40% off with discount code grkamujn at manning.com.

Grokking Algorithms is a disarming take on a core computer science topic. In it, you'll learn how to apply common algorithms to the practical problems you face in day-to-day life as a programmer. You'll start with problems like sorting and searching. As you build up your skills in thinking algorithmically, you'll tackle more complex concerns such as data compression or artificial intelligence. Whether you're writing business software, video games, mobile apps, or system utilities, you'll learn algorithmic techniques for solving problems that you thought were out of your grasp.

What's new?
Chapter 5, "Hash Tables"
Hash tables are a very powerful data structure because they are fast and they let you model data in a different way. Chapter 5 explains how hash tables are built and looks at some of the ways they are used. You might soon find that you are using them all the time.
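As a quick taste of what the chapter covers, the sketch below uses Python's built-in dict, which is implemented as a hash table (the book's own examples are in Python). It shows the two points above: lookups are fast regardless of size, and hash tables let you model data differently, such as counting occurrences by key.

```python
# Python's dict is a hash table: inserts and lookups take constant
# time on average, no matter how many entries the table holds.
phone_book = {}
phone_book["jenny"] = "867-5309"
phone_book["emergency"] = "911"

print(phone_book["jenny"])    # direct lookup by key -- no searching

# Modeling data "in a different way": tallying items by key.
votes = ["alice", "bob", "alice"]
tally = {}
for name in votes:
    tally[name] = tally.get(name, 0) + 1
print(tally)                  # {'alice': 2, 'bob': 1}
```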

What's next?
Chapter 6, "Breadth-First Search"

Read the chapter "Introduction to Algorithms" at manning.com.

Save 40% with discount code grkamujn at manning.com.

JavaEE Tip #2 - Location of the JavaEE tutorial

And it is tutorial time!

Where are the JavaEE tutorials for each of the JavaEE versions?


All brand names, logos and trademarks in this site are property of their respective owners.

Copyright 2001-2006 Gayan Balasooriya.   
All Rights Reserved.