Sunday, December 26, 2010

How to Dump/Inspect Object or Variable in Java

Scala (console) has a very useful feature for inspecting or dumping variable/object values:

scala> def b = Map("name" -> "Yudha", "age" -> 27)
b: scala.collection.immutable.Map[java.lang.String,Any]

scala> b
res1: scala.collection.immutable.Map[java.lang.String,Any] = Map((name,Yudha), (age,27))

Inside our applications, especially in the Java programming language (although the techniques below obviously work with any JVM language such as Scala or Groovy), we sometimes want to inspect or dump the contents of an object or value, usually for debugging or logging purposes.

My two favorite techniques are simply to serialize the Java object to JSON and/or XML. An added benefit is that it's possible to deserialize the dumped representation back into an actual object if you want.

JSON Serialization with Jackson

Depend on Jackson (using Maven):
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.6.3</version>
</dependency>
Then use it:
import org.codehaus.jackson.JsonGenerationException;
import org.codehaus.jackson.map.JsonMappingException;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

...
Logger logger = LoggerFactory.getLogger(getClass());

@Test
public void level() throws ServiceException, JsonGenerationException, JsonMappingException, IOException {
    MagentoServiceLocator locator = new MagentoServiceLocator();
    Mage_Api_Model_Server_HandlerPortType port = locator.getMage_Api_Model_Server_HandlerPort();
    String sessionId = port.login("...", "...");
    logger.info(String.format("Session ID = %s", sessionId));
    Map[] categories = (Map[]) port.call(sessionId, "catalog_category.level", new Object[] { null, null, 2 });
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(SerializationConfig.Feature.INDENT_OUTPUT, true);
    logger.info(mapper.writeValueAsString(categories));
}

Example output:

6883 [main] INFO id.co.bippo.shop.magentoclient.AppTest - [ {
  "position" : "1",
  "level" : "2",
  "is_active" : "1",
  "name" : "Gamis",
  "category_id" : "3",
  "parent_id" : 2
}, {
  "position" : "2",
  "level" : "2",
  "is_active" : "1",
  "name" : "Celana",
  "category_id" : "5",
  "parent_id" : 2
} ]
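The test above depends on Magento SOAP stubs, so here is a minimal self-contained sketch of the same technique, assuming only the jackson-mapper-asl dependency above (the JsonDump class and dump method names are made up for illustration):

```java
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;

public class JsonDump {

    /** Serializes any bean/Map/array to pretty-printed JSON. */
    static String dump(Object obj) {
        try {
            ObjectMapper mapper = new ObjectMapper();
            mapper.configure(SerializationConfig.Feature.INDENT_OUTPUT, true);
            return mapper.writeValueAsString(obj);
        } catch (IOException e) {
            throw new RuntimeException("Cannot dump " + obj, e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> person = new LinkedHashMap<String, Object>();
        person.put("name", "Yudha");
        person.put("age", 27);
        System.out.println(dump(person));
    }
}
```

Because the dump goes through Jackson's ordinary serialization, the same string can later be read back with mapper.readValue() if needed.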

XML Serialization with XStream

As a side note, XStream can also produce JSON, using either Jettison or its own JSON driver; however, people usually prefer Jackson over XStream for JSON serialization.

Maven dependency for XStream:
<dependency>
  <groupId>xstream</groupId>
  <artifactId>xstream</artifactId>
  <version>1.2.2</version>
</dependency>
Use it:
import java.io.IOException;
import java.rmi.RemoteException;
import java.util.Map;

import javax.xml.rpc.ServiceException;

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.thoughtworks.xstream.XStream;
...
@Test
public void infoXml() throws ServiceException, RemoteException {
    MagentoServiceLocator locator = new MagentoServiceLocator();
    Mage_Api_Model_Server_HandlerPortType port = locator.getMage_Api_Model_Server_HandlerPort();
    String sessionId = port.login("...", "...");
    logger.info(String.format("Session ID = %s", sessionId));
    Map category = (Map) port.call(sessionId, "catalog_category.info", new Object[] { 3 });
    XStream xstream = new XStream();
    logger.info(xstream.toXML(category));
}

Sample output:

5949 [main] INFO id.co.bippo.shop.magentoclient.AppTest - <map>
  <entry>
    <string>position</string>
    <string>1</string>
  </entry>
  <entry>
    <string>custom_design</string>
    <string></string>
  </entry>
  <entry>
    <string>custom_use_parent_settings</string>
    <string>0</string>
  </entry>
  <entry>
    <string>custom_layout_update</string>
    <string></string>
  </entry>
  <entry>
    <string>include_in_menu</string>
    <string>1</string>
  </entry>
  <entry>
    <string>custom_apply_to_products</string>
    <string>0</string>
  </entry>
  <entry>
    <string>meta_keywords</string>
    <string>gamis, busana muslim</string>
  </entry>
  <entry>
    <string>available_sort_by</string>
    <string></string>
  </entry>
  <entry>
    <string>url_path</string>
    <string>gamis.html</string>
  </entry>
  <entry>
    <string>children</string>
    <string></string>
  </entry>
  <entry>
    <string>landing_page</string>
    <null/>
  </entry>
  <entry>
    <string>display_mode</string>
    <string>PRODUCTS</string>
  </entry>
  <entry>
    <string>level</string>
    <string>2</string>
  </entry>
  <entry>
    <string>description</string>
    <string>Gamis untuk muslimah</string>
  </entry>
  <entry>
    <string>name</string>
    <string>Gamis</string>
  </entry>
  <entry>
    <string>path</string>
    <string>1/2/3</string>
  </entry>
  <entry>
    <string>created_at</string>
    <string>2010-12-24 11:37:41</string>
  </entry>
  <entry>
    <string>children_count</string>
    <string>0</string>
  </entry>
  <entry>
    <string>is_anchor</string>
    <string>1</string>
  </entry>
  <entry>
    <string>url_key</string>
    <string>gamis</string>
  </entry>
  <entry>
    <string>parent_id</string>
    <int>2</int>
  </entry>
  <entry>
    <string>filter_price_range</string>
    <null/>
  </entry>
  <entry>
    <string>all_children</string>
    <string>3</string>
  </entry>
  <entry>
    <string>is_active</string>
    <string>1</string>
  </entry>
  <entry>
    <string>page_layout</string>
    <string></string>
  </entry>
  <entry>
    <string>image</string>
    <null/>
  </entry>
  <entry>
    <string>category_id</string>
    <string>3</string>
  </entry>
  <entry>
    <string>default_sort_by</string>
    <null/>
  </entry>
  <entry>
    <string>custom_design_from</string>
    <null/>
  </entry>
  <entry>
    <string>updated_at</string>
    <string>2010-12-24 11:37:41</string>
  </entry>
  <entry>
    <string>meta_description</string>
    <string>Jual baju gamis untuk muslim</string>
  </entry>
  <entry>
    <string>custom_design_to</string>
    <null/>
  </entry>
  <entry>
    <string>path_in_store</string>
    <null/>
  </entry>
  <entry>
    <string>meta_title</string>
    <string>Gamis</string>
  </entry>
  <entry>
    <string>increment_id</string>
    <null/>
  </entry>
</map>
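Again, a self-contained sketch of the XStream technique without the Magento stubs (the XmlDump class name is invented; only the xstream dependency above is assumed):

```java
import java.util.LinkedHashMap;
import java.util.Map;

import com.thoughtworks.xstream.XStream;

public class XmlDump {

    /** XStream needs no configuration at all for ad-hoc dumps. */
    static String dump(Object obj) {
        return new XStream().toXML(obj);
    }

    public static void main(String[] args) {
        Map<String, Object> category = new LinkedHashMap<String, Object>();
        category.put("name", "Gamis");
        category.put("level", "2");
        System.out.println(dump(category));
    }
}
```

The output is the same <map>/<entry> structure shown above; xstream.fromXML() can turn it back into an object.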

Which one is better?

I personally prefer JSON, but fortunately, you always have a choice. :-)

Monday, December 20, 2010

Eclipse RAP Single Sourcing Awesomeness (with EMF Editor and Teneo+Hibernate as bonus!)

Eclipse Rich Client Platform has come a looong way since it was first introduced (and used in Eclipse IDE). The new Eclipse RAP (Rich Application Platform) is also becoming more and more attractive for deploying existing or new Eclipse RCP applications to the web.

One of the projects I'm working on is developed on top of Eclipse RCP. It uses additional plugins such as EMF (Eclipse Modeling Framework) including the EMF editor UI, Teneo (EMF persistence for relational databases), and Hibernate.

After some work, I managed to run the whole application on both Eclipse RCP (desktop) and Eclipse RAP (web-based). See the screenshots for proof.

Thanks to the recently released EMF support for RAP, I didn't have to let go of any of the nice EMF-generated editor UIs in the web-based RAP version.

What's amazing is how little work I had to do to port the RCP app to RAP.

The changes I needed were not code changes, but juggling dependencies on plugins and/or packages, plus creating a few platform-specific plugins (different depending on whether I deploy on RCP or RAP).

It boils down to:

  1. Do not hard-depend on the org.eclipse.ui plugin. Either depend on both the org.eclipse.ui and org.eclipse.rap.ui plugins as optional dependencies, or import the specific packages. I prefer optional dependencies on both plugins because it's much faster and easier.
  2. Be aware that there will be multiple sessions at once.
  3. 2D drawing functions are not yet fully available (and I guess they never will be).

See the Eclipse RAP FAQ on Single Sourcing for more information.

Fixing error: The type org.eclipse.core.runtime.IAdaptable cannot be resolved. It is indirectly referenced from required .class files.

If you get one of the following errors:

The type org.eclipse.core.runtime.IAdaptable cannot be resolved. It is indirectly referenced from required .class files.

The type org.eclipse.core.runtime.CoreException cannot be resolved. It is indirectly referenced from required .class files.

First of all, check that your plugins depend on org.eclipse.core.runtime plugin.

The classes above are located in org.eclipse.equinox.common plugin, and should be included (along with org.eclipse.core.runtime plugin) in your target platform.

If it still occurs, most likely you got your target platform plugins mixed up. I got this error because I tried to mix plugins from my Eclipse IDE installation (Helios 3.6 SR-1) with Eclipse 3.7M4 Platform plugins.

Simply unchecking the plugins in the target platform's Contents tab is not enough; I had to actually remove the location. If you notice duplicate plugins in your Target Platform Definition's Contents tab, then you have this problem.

Remove the offending plugin location from the target platform definition. If you must add plugins from the Eclipse IDE location, cherry-pick each plugin in the Locations UI instead of adding the whole SDK and unchecking plugins in the Contents tab.

Sunday, December 19, 2010

Creating an About Dialog for your Eclipse RCP Application

Displaying an About box in your Eclipse RCP application/product is actually very simple. However, as is typical of a framework (the "don't call us, we'll call you" principle), there are conventions you must follow.

Before you do this, you must already have created an Eclipse RCP application class that implements IApplication and registered it as an Eclipse extension in your plugin.xml.

Adding the Help > About Menu Item


First, edit your ApplicationActionBarAdvisor class (which extends org.eclipse.ui.application.ActionBarAdvisor base class), as follows:

    private IWorkbenchAction aboutAction; // org.eclipse.ui.actions.ActionFactory.IWorkbenchAction

    @Override
    protected void makeActions(IWorkbenchWindow window) {
        aboutAction = ActionFactory.ABOUT.create(window);
        register(aboutAction);
    }

That will register the About action. You will also need to add the menu action itself to the menu bar:

    @Override
    protected void fillMenuBar(IMenuManager menuBar) {
        MenuManager helpMenu = new MenuManager("&Help", IWorkbenchActionConstants.M_HELP);

        menuBar.add(helpMenu);
        helpMenu.add(aboutAction);
    }

Launch your Eclipse RCP application and you can display the About dialog.

Customizing the About Dialog Box


To customize the About dialog box's contents, first you must create an Eclipse RCP Product Configuration.
Click File > New > Product Configuration and define your product.

This should also add your product to extension point 'org.eclipse.core.runtime.products'.

Edit your project's launch configuration to use your Product Configuration instead of Eclipse Application. Verify that it works.

Now you can customize the About box contents by editing your Product Configuration (.product file): go to the Branding tab, About Dialog section, and specify the About Image and About Text there.

To specify the image, import an image resource (PNG, GIF, JPG) to your plugin project (e.g. inside an /icons folder). Important: Your image will need to be included inside your resulting plugin binary. Edit your plugin's manifest, go to Build tab, and make sure your images/icons are included (checked) for the binary build and source build.

After editing your Product Configuration, you must synchronize it with the plugin manifest. Go to the Product's Overview tab and click Synchronize (under Testing).

That action will update several properties in the org.eclipse.core.runtime.products extension, for example:

   <extension
         id="abispulsa_rcp"
         point="org.eclipse.core.runtime.products">
      <product
            application="com.abispulsa.bisnis.rcp.application"
            description="Layanan bagi korporasi untuk dapat dengan mudah mengisi pulsa."
            name="AbisPulsa Bisnis RCP">
         <property
               name="appName"
               value="AbisPulsa Bisnis RCP">
         </property>
         <property
               name="aboutText"
               value="AbisPulsa Bisnis merupakan layanan bagi korporasi untuk dapat dengan mudah mengisi pulsa.">
         </property>
         <property
               name="aboutImage"
               value="icons/AbisPulsa_icon_75x75.png">
         </property>
      </product>
   </extension>

For your information: Other properties you can use include:

  • windowImages
  • aboutImage
  • aboutText
  • appName
  • welcomePage
  • preferenceCustomization

References:
  1. http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.platform.doc.isv/guide/product_def_extpt.htm
  2. http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.platform.doc.isv/reference/extension-points/org_eclipse_core_runtime_products.html

Fixing Eclipse RCP Launch Error: Application "org.eclipse.ui.ide.workbench" could not be found in the registry.

If you encounter the following error message when launching your Eclipse RCP application/plugin:

!ENTRY org.eclipse.osgi 4 0 2010-12-20 00:49:08.433
!MESSAGE Application error
!STACK 1
java.lang.RuntimeException: Application "org.eclipse.ui.ide.workbench" could not be found in the registry. The applications available are: org.eclipse.ant.core.antRunner, org.eclipse.jdt.core.JavaCodeFormatter, org.eclipse.help.base.infocenterApplication, org.eclipse.help.base.helpApplication, org.eclipse.help.base.indexTool, com.abispulsa.bisnis.rcp.application, org.eclipse.equinox.app.error.
at org.eclipse.equinox.internal.app.EclipseAppContainer.startDefaultApp(EclipseAppContainer.java:248)
at org.eclipse.equinox.internal.app.MainApplicationLauncher.run(MainApplicationLauncher.java:29)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
at org.eclipse.equinox.launcher.Main.main(Main.java:1383)

It means the launch configuration is trying to run the default "workbench" application used by the Eclipse IDE, but you haven't added the plugin that provides it to your target platform / enabled plugins:

org.eclipse.ui.ide.application

For a custom RCP application you usually don't want that default application at all; instead, run your own.

When you've created your own application class which implements org.eclipse.equinox.app.IApplication interface, you need to register an Eclipse extension to org.eclipse.core.runtime.applications in your plugin.xml like the following example:

   <extension id="application"
         point="org.eclipse.core.runtime.applications">
      <application>
         <run class="com.abispulsa.bisnis.rcp.Application">
         </run>
      </application>
   </extension>

Then edit your Eclipse Application launch configuration to use your own application class.

Mirroring an Eclipse Update Site to a Local p2 Repository

Installing or updating Eclipse plugins/features is easy using Eclipse p2 update sites. However, since it requires downloading from the Internet, the process is often very slow.

Some projects provide an archived update site (.zip file), but most do not. When installing or updating features for multiple Eclipse IDE or RCP application installations, downloading the same files repeatedly from the Internet gets annoying, not to mention it wastes precious time AND bandwidth.

Thankfully there is a way to create a local p2 repository that acts as a mirror site to the original Eclipse p2 Update Sites.

This is useful, for example, for making a full Eclipse release or a set of Eclipse features/plugins available to internal corporate users, reducing the bandwidth normally consumed by dozens of users downloading the same bits from external Eclipse p2 update sites.

Documentation


First, you need to mirror the site (or a particular feature), so take a look at the mirror command described here:
Running Update Manager from Command Line

then create a site policy (a type of redirection) as described here:
Controlling the Eclipse Update Policy

Command Examples


You can start the update manager in standalone mode to create a mirror of an update site using this command:

java -Dhttp.proxyHost=yourProxy -Dhttp.proxyPort=yourProxyPort \
  -jar plugins/org.eclipse.equinox.launcher_<version>.jar \
  -application org.eclipse.update.core.standaloneUpdate -command mirror \
  -from %updateSiteToMirror% -mirrorUrl %urlOfYourUpdateSite% \
  -to %fileLocationToMirrorTo%

Run this command from the Eclipse install directory (i.e. where startup.jar is).
Replace %fileLocationToMirrorTo% with the local directory the update site contents will be copied to, and %urlOfYourUpdateSite% with a URL that will point to that directory.

Of course you will need to install a local web server, such as Apache HTTPD, and configure it according to the directory/URL you specified before.

It even supports creating one mirror site from multiple sites: if you specify the same location for multiple sites, it will append them to site.xml, giving you one big (and messy) update site.

An easy way to use this is a DOS or Bash script, of course. For example, the following script mirrors the relevant update sites:

set LAUNCHER=C:\opt\springsource-2.1\sts-2.1.0.RELEASE\plugins/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar

call updateSite http://subclipse.tigris.org/update_1.6.x subclipse
call updateSite http://pmd.sourceforge.net/eclipse pmd
call updateSite http://m2eclipse.sonatype.org/update/ m2eclipse
call updateSite http://findbugs.cs.umd.edu/eclipse/  findbugs
call updateSite http://moreunit.sourceforge.net/org.moreunit.updatesite/  moreunit
call updateSite http://www.springsource.com/update/e3.5 sprinsource-e35
call updateSite http://eclipse-cs.sf.net/update checkstyle
call updateSite http://update.atlassian.com/atlassian-eclipse-plugin atlassian
call updateSite http://commonclipse.sourceforge.net commonclipse
call updateSite https://ajax.dev.java.net/eclipse glassfish
call updateSite http://andrei.gmxhome.de/eclipse/ gmx-plugins
call updateSite http://regex-util.sourceforge.net/update/ regex
call updateSite http://ucdetector.sourceforge.net/update/ ucdetector

goto:eof

:updateSite
java -Dhttp.proxyHost=yourProxy -Dhttp.proxyPort=yourProxyPort -jar %LAUNCHER% -application org.eclipse.update.core.standaloneUpdate -command mirror -from %1 -mirrorUrl http://server/eclipseupdatesite/%2 -to Y:\%2
goto:eof
This gives us multiple update sites under http://server/eclipseupdatesite/, like http://server/eclipseupdatesite/m2eclipse etc. Of course you still need one computer with unrestricted/fast Internet access, but you can always create those sites at home.

Aggregating Specific Features

You can also aggregate several features from other update sites to your own, using either p2.mirror, p2 Composite Repositories, b3, or Nexus Pro.

See http://stackoverflow.com/questions/4378112/p2-repositories-aggregator


Sources:

  1. http://dev.eclipse.org/newslists/news.eclipse.platform/msg29529.html
  2. http://stackoverflow.com/questions/4378112/p2-repositories-aggregator
  3. http://www.willianmitsuda.com/2007/03/09/mirroring-callisto-update-site/
  4. http://www.denoo.info/2009/09/mirroring-eclipse-update-sites/

Making Software Literate: The Parser, The Interpreter, and The Literal

In my last post, the title suggested "teaching code to read itself"; however, all I actually wrote about was the Interpreter.

For code to read itself, it must be able to generate a model from itself. Therefore, you need the metamodel of the programming language the software is written in, and the grammar of that language. Using those two ingredients and the proper tools, you can generate a model from the software.

An example of a comprehensive tool for doing this is Eclipse MoDisco. It comes with complete tooling for discovering / reflecting / introspecting a Java project.

However, for software to understand the model, it must also have an Interpreter/Evaluator, which can do something with the model.

In a way, a generator is a specialized kind of interpreter which simply outputs the concrete representation ("artifact") of the model being processed.

To not only read a model but also make changes to it, we need a Manipulator (what a scary name), which is a kind of interpreter that performs actions on a model. A sample action: delete an EClass node named 'Car'.

After making changes, the resulting model can be generated back to file artifacts using the generator. The project can then be rebuilt; only the changed artifacts need regenerating.

To rebuild the project from scratch though, we need a complete set of project files.

A typical software project consists not only of a single language (hence metamodel) but also of several other artifacts, including:
- build.xml (Ant build)
- plugin.xml, MANIFEST.MF (PDE project)
- pom.xml (Maven project)
- build.gradle (Gradle build)
- .project, .classpath, build.properties (Eclipse JDT project)

Depending on requirements, it may not be necessary (or sometimes even desirable) to model all of those artifacts properly. Sometimes it's enough to model a file as a 'Literal':

File: EClass
-----------------------
name: EString
directory: EString
contents: EString

Which in practice means that these artifacts are not part of the model-driven lifecycle at all (i.e. you can actually ignore them and it won't even matter).
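As a rough illustration, the 'Literal' model above could look like this as a plain Java class (a hypothetical sketch; in the post's setting it would be a generated EMF class):

```java
/** A 'Literal' file: modeled only as raw contents, kept outside
 *  the model-driven lifecycle proper. */
public class FileLiteral {

    final String name;       // e.g. "build.xml"
    final String directory;  // e.g. "/project"
    final String contents;   // the raw file text, kept verbatim

    FileLiteral(String name, String directory, String contents) {
        this.name = name;
        this.directory = directory;
        this.contents = contents;
    }

    /** The 'generator' for a literal just writes the contents back out. */
    public static void main(String[] args) {
        FileLiteral build = new FileLiteral("build.xml", "/project",
                "<project name=\"demo\"/>");
        System.out.println(build.directory + "/" + build.name);
    }
}
```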

Model-driven is all about transformation, or processing, or shapeshifting, or (meta)morphing. If an artifact or model stays the same throughout the lifecycle, and it's not being used as a metamodel for transformation of its instances, then it's the same thing as if modeling is not used at all.

When all project artifacts are understood, 'literalled', or generated, the project can be rebuilt from scratch using the model. With a good headless build system such as Maven or Gradle, this should be simple.

The other part to include is the information "in the programmer's head". We deal with this every day, so it seldom occurs to us that it *is* information.

Things like:
- the directory location of the project
- project name
- location and name of the generated binaries
- SCM user, password, URL
- SCM tool
- test status (whether the tests pass, how many tests, how many fails, how many passes)

This information should be modeled, and a specialized interpreter can be created to act on the project.
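A hypothetical sketch of that "head information" as a plain Java model (all names are invented; the post implies modeling this in Ecore):

```java
/** Hypothetical model of the project information usually kept
 *  "in the programmer's head". */
public class ProjectDescriptor {

    String directory;        // directory location of the project
    String name;             // project name
    String binariesLocation; // location and name of the generated binaries
    String scmUrl;           // SCM URL (user/password omitted in this sketch)
    String scmTool;          // e.g. "svn", "git"
    int testCount;           // how many tests
    int testFailures;        // how many fail

    /** Test status derived from the counts. */
    boolean testsPass() {
        return testFailures == 0;
    }
}
```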

A final behavior is 'replaceSelf', which requires the following information:
1. self source location
2. self binary location
3. self descriptor model
4. location of *this* self descriptor model
5. prototype source location
6. prototype binary location
7. prototype descriptor model

where 'prototype' is the project that we've discussed and built above.

The replaceSelf behavior, given the self descriptor model, will update/replace itself using the prototype locations, and also update the self descriptor model (e.g. update the version number).

If the software runs continuously (as a server/daemon), it can then fork a new version of itself, then terminate its own instance.

I guess now the lifecycle is complete. ;-)

Making Software Literate: Teaching Code to Read Itself

My previous post introduces the concept of Grammar, Template, and the Metamodel.

Creating a metamodel is easy for humans; there are the Ecore editor and Ecore Tools for that.

Generating artifacts from a model is also relatively easy, at least compared to "reading" artifacts and generating a model.

In the machine world, almost everything is harder to read than to write.

For example, it is easy for a generator to output this:

lastName = "Irawan"
name = "Hendy " + lastName

However, it's not easy to "read" the artifact above. By that I mean the parser should "understand" that 'name' contains "Hendy Irawan".

And it brings us to... Interpreter.

An Interpreter is an enhanced parser that knows what to do with a model. Advanced interpreters may have behaviors, but the most commonly used functionality is evaluating expressions.

An Evaluator transforms expressions in the metamodel into actual values (which are still expressions, called value expressions, but need no further processing). After the example artifact above is parsed AND interpreted/evaluated, the 'name' object will rightly contain "Hendy Irawan".
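To make the idea concrete, here is a toy evaluator sketch in Java (entirely hypothetical; a real interpreter would walk a parsed model rather than take pre-split operands). It resolves string literals and variable references so that 'name' ends up holding "Hendy Irawan":

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy evaluator: each operand is either a quoted string literal
 *  or a reference to an earlier variable binding. */
public class TinyEvaluator {

    final Map<String, String> env = new LinkedHashMap<String, String>();

    void assign(String var, String... operands) {
        StringBuilder value = new StringBuilder();
        for (String op : operands) {
            if (op.startsWith("\"")) {
                value.append(op.substring(1, op.length() - 1)); // literal
            } else {
                value.append(env.get(op)); // look up earlier binding
            }
        }
        env.put(var, value.toString());
    }

    public static void main(String[] args) {
        TinyEvaluator eval = new TinyEvaluator();
        // lastName = "Irawan"
        eval.assign("lastName", "\"Irawan\"");
        // name = "Hendy " + lastName
        eval.assign("name", "\"Hendy \"", "lastName");
        System.out.println(eval.env.get("name")); // prints "Hendy Irawan"
    }
}
```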

Grammar and Template's Role in Modeling

Grammar and Template are two essential components for model-driven reflection and generation.

Note that this article takes a high-level, abstract view of Model Driven Engineering. It's going to be boring and theoretical, but I have to write it down so I won't forget. ;-)

With a Grammar, you can 'read' (understand, introspect, reflect, parse, deconstruct, reverse-engineer, or in a way deserialize) a concrete artifact, producing a model as output.

With a Template, you can 'write' (generate, create, build, construct, or in a way serialize) an artifact as output, from a model.

The third element, required by both processes, is the metamodel. It's actually a prerequisite.

These three are the core ingredients for model driven engineering.

Now to actually bake these ingredients, you need equipment: the tools. Fortunately these are all provided by the Eclipse Modeling projects.

The first ingredient you need to create is the metamodel, which usually means an Ecore model. (In later phases the metamodel is not always the first artifact; it can itself be generated.)

To read an artifact, you need a tool, the parser, that can read your metamodel, and you create a grammar in a language that the tool understands. An example is Xtext with its grammar language; Xtext, of course, understands Ecore models as the metamodel.

Given: metamodel + grammar --> Xtext, you will get a Parser.

A parser is an artifact reader that is customized to your grammar and metamodel. Given concrete artifact(s) as input, it generates a model as output.

To write an artifact, you need a tool, a generator, that can read your metamodel. Then you create a template in a language that the generator understands.

Such a generator is Eclipse M2T Xpand, and its Xpand language.

Given metamodel + model + template --> Xpand, you get: (textual) artifact!

Actually, the concrete examples above assume one thing: the other metamodel (the "artifact") is a filesystem with text files.

In an abstract way, all transformations are model-to-model (M2M) transformations, hence conceptually requiring two metamodels, not just one. M2T (generation) and T2M (parsing) are conceptually M2M where one side is textual files, or "textual metamodel".

In order for code to understand itself, it must be able to generate a model (reverse engineer / discover / import) from its own code.

Its model can then be used to generate code and build it, thus understanding and building itself.

Saturday, December 18, 2010

What You Need to Start Model-to-Text Transformation with Xpand

Setup your Model-driven Development Environment:
  1. Get Eclipse IDE, Modeling distribution.
  2. Install Xpand to Eclipse IDE
  3. Install MWE to Eclipse IDE
  4. Ecore Tools is optional, but it helps you to create your metamodel (Ecore model) using a nicer visual diagram editor.

Now you can create your project:
  1. Create your metamodel (.ecore file). Better just use Ecore Tools, and create .ecorediag diagram alongside .ecore file.
  2. Create a .genmodel from your metamodel.
  3. Generate your Model package and classes from the .genmodel.
  4. Create a sample model (instance).
  5. Open the .ecore model file, right click on an EClass and choose Create Dynamic Instance.
  6. Create an Xpand template.
  7. Create a MWE workflow file (.mwe).
  8. Add necessary plugin dependencies to the MANIFEST.MF.
Now you can run the workflow.
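To make step 7 concrete, here is a minimal workflow sketch of the shape MWE expects. The component class names are taken from the Eclipse MWE/Xpand documentation of that era (verify them against your installed version), and the model path, template name, and outlet path are placeholders to adapt:

```xml
<workflow>
    <!-- make platform:/resource URIs resolvable when running standalone -->
    <bean class="org.eclipse.emf.mwe.utils.StandaloneSetup" platformUri=".."/>

    <!-- read the sample model into the 'model' slot -->
    <component class="org.eclipse.emf.mwe.utils.Reader"
               uri="platform:/resource/my.project/src/sample.xmi"
               modelSlot="model"/>

    <!-- expand the Xpand template over the model -->
    <component class="org.eclipse.xpand2.Generator">
        <metaModel class="org.eclipse.xtend.typesystem.emf.EmfRegistryMetaModel"/>
        <expand value="templates::Main::main FOR model"/>
        <outlet path="src-gen"/>
    </component>
</workflow>
```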

The steps above are the high-level steps. It sounds complicated and indeed it can be confusing at first. (The reason I'm blogging this here is so that I won't get confused in the future.) ;-)

If you want the simplest thing, use the New Xpand Project Wizard (Sample Xpand Project) and it'll set up a ready-to-use Ecore+Xpand+MWE project for you. :-)

Note: MWE and Xpand package names have changed since they moved from openArchitectureWare to the Eclipse Modeling umbrella, so be aware of this when following older tutorials.

How to Solve MWE/Xpand Workflow Problems/Errors

What to Check :

1. Check your model file (.xmi). See: http://spring-java-ee.blogspot.com/2010/12/fixing-eclipse-mwe-error-workflow.html

2. Check your plugin.xml file; make sure it contains an extension for org.eclipse.emf.ecore.generated_package. Sample:

   <extension point="org.eclipse.emf.ecore.generated_package">
      <package
            uri="http://www.bippo.co.id/shop/3.0/magentoconfig/1.0"
            class="id.co.bippo.magento.config.MagentoConfigPackage"/>
   </extension>

3. Check that you've (re-)generated your model classes. Open your .genmodel and regenerate the Model.

Fixing Eclipse MWE Error: Workflow interrupted. Reason: Couldn't load resource under platform:/resource/*.xmi : org.eclipse.emf.ecore.xmi.PackageNotFoundException: Package with uri 'http://*' not found.

If you ever tried to run an Eclipse MWE Workflow, most likely to do M2T (Model-to-Text) transformation using Xpand from an EMF Ecore model to text files, you'll surely encounter several errors at first (or later).

For my case, the error is as follows:

0    INFO  WorkflowRunner     - --------------------------------------------------------------------------------------
7    INFO  WorkflowRunner     - EMF Modeling Workflow Engine 1.0.0, Build v201008251122
7    INFO  WorkflowRunner     - (c) 2005-2009 openarchitectureware.org and contributors
7    INFO  WorkflowRunner     - --------------------------------------------------------------------------------------
8    INFO  WorkflowRunner     - running workflow: /home/ceefour/project/Bippo/modeling_workspace/id.co.bippo.models/src/workflow/makegradle.mwe
8    INFO  WorkflowRunner     -
542  INFO  StandaloneSetup    - Registering platform uri '/home/ceefour/project/Bippo/modeling_workspace'
686  INFO  CompositeComponent - Reader: Loading model from platform:/resource/id.co.bippo.models/src/demo.bippo.co.id.xmi
780  ERROR WorkflowRunner     - Workflow interrupted. Reason: Couldn't load resource under platform:/resource/id.co.bippo.models/src/demo.bippo.co.id.xmi : org.eclipse.emf.ecore.xmi.PackageNotFoundException: Package with uri 'http://www.bippo.co.id/shop/3.0/magentoconfig/1.0' not found. (platform:/resource/id.co.bippo.models/src/demo.bippo.co.id.xmi, 2, 312)

This seemed like the dreaded, common platform:/resource URI pitfall.

However, in my case the problem was inside the .xmi model file itself :

xsi:schemaLocation="http://www.bippo.co.id/shop/3.0/magentoconfig/1.0 ../metamodel/magento-config.ecore"

I was moving the metamodel (Ecore model) to another folder/package; however, the XMI file's schemaLocation was hardcoded to a specific path.

Fix the schemaLocation and it will work fine.

Wednesday, December 15, 2010

Gradle Build Script (build.gradle) for Writing Gradle Plugins

Gradle is a great build system written in the Groovy programming language that uses build scripts written in Groovy as well.

There are several ways to write a Gradle plugin, and the most extensible one (also the most complicated) is to build the plugin as a separate project.

I tried to find documentation on how to do this, but I could only find an actual guide for writing "embedded" Gradle plugins (plugins contained inside the buildSrc folder). To move the plugin out of the buildSrc folder and make it independent, you can use a build.gradle build script like this in your plugin project:

apply {
    plugin 'java'
    plugin 'groovy'
    plugin 'maven'
    plugin 'eclipse'
}

group = 'gradle-plugin-javafx'
version = '0.2.0-SNAPSHOT'

dependencies {
    compile gradleApi()
    groovy localGroovy()
}
Change the group and version properties to your own.

Also, you can leave out applying the 'java', 'eclipse', and 'maven' plugins if you don't use them.

Source: http://code.google.com/p/gradle-plugin-javafx/source/browse/build.gradle

Thursday, December 9, 2010

BREAKING: The ASF Resigns From the JCP Executive Committee

It's not that often that we associate Java with "agile" (not that you can't be agile with Java, mind you!)... But the past two years we've been seeing some very major "breaking news" type events around the Java ecosystem. Not only the release of the Java EE 6 specification, but also the acquisition of Sun by Oracle and the chaos that resulted...

The ASF Resigns From the JCP Executive Committee

The Apache Software Foundation has resigned its seat on the Java SE/EE Executive Committee.  Apache has served on the EC for the past 10 years, winning the JCP "Member of the Year" award 4 times, and recently was ratified for another term with support from 95% of the voting community.  Further, the project communities of the ASF, home to Apache Tomcat, Ant, Xerces, Geronimo, Velocity and nearly a 100 mainstay java components have implemented countless JSRs and serve on and contribute to many of the JCPs technical expert groups.

We'd like to provide some explanation to the community as to why we're taking this significant step.

The recent Java SE 7 vote was the last chance for the JCP EC to demonstrate that the EC has any intent to defend the JCP as an open specification process, and demonstrate that the letter and spirit of the law matter.   To sum up the issues at stake in the vote, we believe that while continuing to fail to uphold their responsibilities under the JSPA, Oracle provided the EC with a Java SE 7 specification request and license that are self-contradictory, severely restrict distribution of independent implementations of the spec, and most importantly, prohibit the distribution of independent open source implementations of the spec.  Oracle has refused to answer any reasonable and responsible questions from the EC regarding these problems.

In the phrase "fail to uphold their responsibilities under the JSPA", we are referring to Oracle's refusal to provide the ASF's Harmony project with a TCK license for Java SE that complies with Oracle's obligations under the JSPA as well as public promises made to the Java community by officers of Sun Microsystems (recently acquired by Oracle.)  This breach of the JSPA was begun by Sun Microsystems in August of 2006 and is a policy that Oracle explicitly continues today.  For more information on this dispute, see our open letter to Sun Microsystems (LINK).

This vote was the only real power the Executive Committee has as the governing body of the Java specification ecosystem, and as we indicated previously (LINK) we were looking for the EC to protect the rights of implementers to the degree they are able, as well as preserve the integrity of the JCP licensing structure by ensuring that JCP specifications are able to be freely implemented and distributed.  We don't believe this is an unreasonable position - it should be noted that the majority of the EC members, including Oracle, have publicly stated that restrictions on distribution such as those found in the Java SE 7 license have no place in the JCP - and two distinguished individual members of the EC, Doug Lea and Tim Peierls, both have resigned in protest over the same issue (LINKS).

By approving Java SE 7, the EC has failed on both counts : the members of the EC refused to stand up for the rights of implementers, and by accepting Oracle's TCK license terms for Java SE 7, they let the integrity of the JCP's licensing structure be broken.

The Apache Software Foundation concludes that that JCP is not an open specification process - that Java specifications are proprietary technology that must be licensed directly from the spec lead under whatever terms the spec lead chooses; that the commercial concerns of a single entity, Oracle, will continue to seriously interfere with and bias the transparent governance of the ecosystem;  that it is impossible to distribute independent implementations of JSRs under open source licenses such that users are protected from IP litigation by expert group members or the spec lead; and finally, the EC is unwilling or unable to assert the basic power of their role in the JCP governance process.

In short, the EC and the Java Community Process are neither.

To that end, our representative has informed the JCP's Program Management Office of our resignation, effective immediately.  As such, the ASF is removing all official representatives from any and all JSRs. In addition, we will refuse any renewal of our JCP membership and, of course, our EC position.

. . .

I seriously hope this is for the better future. But I fear this is just one or the first of the many casualties to come . . . :-(

Fasten your seatbelt, Java devs!

Wednesday, December 8, 2010

Installing Ecore Tools Graphical Visual Editor for EMF Models

The default tree-style Ecore model editor provided by the Eclipse IDE is boring. At least as of the Helios release, there is another, sexier option: Ecore Tools (an Incubation project).

I think it's bundled with the "Eclipse IDE Modeling Edition" download but in any case, you can always install it into your Eclipse IDE or an Eclipse RCP Application by using the Eclipse Update Manager:

  1. Go to Help > Install New Software...
  2. Choose Work With > Helios - http://download.eclipse.org/releases/helios
  3. Search for: Ecore Tools SDK (Incubation)
After installing Ecore Tools and restarting Eclipse, here's how to use it:
  1. Right click an .ecore file in the Project Explorer view, and choose Initialize Ecore Diagram File... from the context menu.
  2. Open the .ecorediag file
Wow! Now I get a full-featured diagram editor for my EMF/Ecore models (i.e. Java Classes).

Then you can process your models, for example generate Java class implementations/interfaces, and do other powerful things using EMF tooling.

Wednesday, December 1, 2010

Activiti 5.0 Released - Open Source BPM Suite

Activiti is a new open source BPM (Business Process Management) suite jointly created by Alfresco, Camunda, Atos Origin, Signavio, and other contributors including SpringSource.

Allow me to reproduce their announcement below :


Today we are very proud to announce the first official release for General Availability (GA) of Activiti. In less than 9 months after we left the jBPM team, we've built a broad collaborating community and together we've built the next generation BPM Platform and an astonishing feature list.

There are a couple of crucial decisions Alfresco took when launching Activiti that made these spectacular results possible. First, the combination of the liberal Apache license with the new BPMN 2.0 standard rocks. Two of the community companies are actually in the BPMN 2.0 specification: Alfresco and Camunda.
Timing of the standard and this new project has been very good to us as well. Alfresco gave us the opportunity as ex-jBPM founders to build Activiti as a separate brand and run it as an independent project. That really has been a boost to build this broad community quickly. Given that this strategy has played out even beyond our initial high expectations, we believe we're in for a profound impact on the BPM world.

Here's an overview of what's in this first final release:


Activiti Engine
  • Easily embeddable (just include the .jar)
  • Excellent Spring integration (contributed by SpringSource)
  • Support for all common BPMN 2.0 elements
  • Easy to link any type of Java to process steps
  • Event listeners
  • Transactional timers
  • Audit trails
  • Flexible transaction management
  • Extremely fast / minimal execution overhead
  • Full Query API
  • REST interface

Activiti Explorer
  • Easy task management
  • Starting new process instances
  • Claiming group tasks
  • Starting processes and completing tasks with or without forms
  • Easy deployment of forms with processes

Activiti Probe

  • Operational management console
  • Managing deployment
  • Business archive file upload
  • Managing jobs
  • View database table contents

Activiti Designer
  • Contributed by Tijs, Ron, Tiese and Yvo from Atos Origin
  • Eclipse plugin
  • New Activiti project and diagram wizards
  • Graphical process modeling
  • Form support for Activiti extensions
  • Pluggable activity types! Fully documented!
  • Unit test generation
  • Validation with errors showing in Eclipse Problem view

Activiti Cycle
  • Contributed by Camunda
  • BPM collaboration done right
  • Spans business users, developers and system admins
  • Repos: Activiti Modeler, SVN, JIRA, File system
  • Linking of artifacts in repos
  • Pluggable actions depending on the artifact type

Activiti Modeler
  • Contributed by Signavio
  • Web based graphical BPMN 2.0 authoring
  • Saves models in a shared file based repository
  • Very intuitive to use!


Other integration contributions

Special thanks to Next Level Integration for hosting the continuous integration hudson service!
We're also very excited about the Manning book for which the early access program will start real soon. Watch out for Activiti in Action by Tijs Rademakers and Ron van Liempd.

But all this just means that you can start using Activiti now and that we can get started on the 5.1 ;-)
Download Activiti
Here are the instructions to get up and running in less than a minute.
What are you waiting for ?!

Thursday, November 25, 2010

Referential Integrity - Good or Bad?

In the relational database world, RDBMSes such as MySQL, PostgreSQL, Derby / JavaDB, and HSQLDB provide Referential Integrity.

It's very useful to avoid consistency mistakes with foreign keys during operation.

It's useful when we live in a *relational* world. But during development of a modular application with agile, frequent upgrades... is referential integrity helping or hindering productivity?

Consider an application that has a Customer table with a column that refers to a Country table. Each deployment would have its own Customer table data. However, the Country table is pretty much "shared" globally; it is never meant to be modified from the admin's perspective.

When there is a new country or a modification, a new version of the application will be released, containing an update to the Country table.

In the old-school style of upgrades, the Country table should be replaceable similar to how we replace files during an upgrade, i.e. by overwriting a file called countries.xml.

However, due to referential integrity, it's not possible to simply drop the table and recreate it with the new data. We have to issue proper DML SQL statements to update the data from the "current" version (yes, we must detect what the current version is) to the new version.
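That version-detection dance tends to look like the sketch below. The table name, version numbers, and DML statements here are hypothetical, and in a real upgrade each pending statement would be executed over JDBC:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of version-aware upgrades forced on us by referential integrity:
// instead of replacing the Country table wholesale, detect the current
// schema version and apply only the DML that comes after it.
public class CountryMigrator {

    // Hypothetical migration scripts, keyed by the version they upgrade TO.
    static final Map<Integer, String> MIGRATIONS = new LinkedHashMap<Integer, String>();
    static {
        MIGRATIONS.put(2, "INSERT INTO Country (code, name) VALUES ('TL', 'Timor-Leste')");
        MIGRATIONS.put(3, "UPDATE Country SET name = 'Myanmar' WHERE code = 'MM'");
    }

    // Returns the DML needed to go from currentVersion to the latest version.
    public static List<String> pendingStatements(int currentVersion) {
        List<String> pending = new ArrayList<String>();
        for (Map.Entry<Integer, String> e : MIGRATIONS.entrySet()) {
            if (e.getKey() > currentVersion) pending.add(e.getValue());
        }
        return pending;
    }

    public static void main(String[] args) {
        // A deployment still at version 2 only needs the version-3 statement.
        System.out.println(pendingStatements(2));
    }
}
```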

All in the name of not breaking foreign key checks aka referential integrity.

Isn't the RDBMS making simple things complex?

Monday, November 22, 2010

Deploying Eclipse BIRT Web Viewer to GlassFish 3.0.1 on Ubuntu 10.10

Eclipse BIRT is a free / open source reporting engine for Java.

A commercial BIRT Report Server is available from Actuate (the company behind Eclipse BIRT). While Eclipse BIRT does not provide a free/open source reporting server, the BIRT Runtime provides a simple Eclipse BIRT Web Viewer.

Eclipse BIRT Web Viewer installation instructions for several Java EE application servers are here.

Here I share my own experience installing Eclipse BIRT 2.6.1 Web Viewer under GlassFish 3.0.1 Java EE Application Server :

  1. Install the packages sun-java6-jdk and ttf-mscorefonts-installer
    Package ttf-mscorefonts-installer contains Microsoft fonts needed by some reports.

  2. Install GlassFish 3.0.1+.
    When asked for JVM, enter: /usr/lib/jvm/java-6-sun
Run GlassFish. In a Terminal, go to the GlassFish directory and type:
bin/asadmin start-domain

Default GlassFish domain is : domain1

It will show something like:

tuneeca@geulis:~/glassfishv3$ bin/asadmin start-domain
Waiting for DAS to start ......
Started domain: domain1
Domain location: /home/tuneeca/glassfishv3/glassfish/domains/domain1
Log file: /home/tuneeca/glassfishv3/glassfish/domains/domain1/logs/server.log
Admin port for the domain: 4848
Command start-domain executed successfully.

Now that GlassFish is running, you need to deploy Eclipse BIRT Web Viewer:
  1. Download birt-runtime-*.zip from BIRT Downloads under BIRT Runtime.
    Inside this archive there is birt.war file.
  2. Deploy birt.war through GlassFish admin ( http://localhost:4848/ ) or by copying birt.war to folder glassfish/domains/domain1/autodeploy/
    In the log file glassfish/domains/domain1/server.log (to follow this file, use tail -f ) you should see something like :
    [#|2010-11-22T16:55:54.411+0700|INFO|glassfish3.0.1|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=23;_ThreadName=Thread-1;|[AutoDeploy] Selecting file /home/tuneeca/glassfishv3/glassfish/domains/domain1/autodeploy/birt.war for autodeployment.|#]

  3. Check BIRT Web Viewer is running at : http://localhost:8080/birt/
You will need two additional libraries: Apache Commons Logging and the JDBC driver for your database.

Commons Logging

birt.war needs Apache Commons Logging. Without it, you'll get an error: java.lang.NoClassDefFoundError: org.apache.commons.logging.LogFactory

Copy the file commons-logging-1.1.1.jar to folder glassfish/domains/domain1/applications/birt/WEB-INF/lib
Then reload the webapp :

touch glassfish/domains/domain1/applications/birt/.reload

MySQL JDBC Driver

If you get an exception like:

An exception occurred during processing. Please see the following message for details:
Cannot open the connection for the driver: org.eclipse.birt.report.data.oda.jdbc.dbprofile.
    org.eclipse.datatools.connectivity.oda.OdaException ;
    java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

For MySQL:
Copy file mysql-connector-java-5.1.13.jar to glassfish/domains/domain1/applications/birt/WEB-INF/platform/plugins/org.eclipse.birt.report.data.oda.jdbc_[version]/drivers

For other databases, copy the appropriate JDBC driver(s).

Go to: http://localhost:8080/birt/

Now you should be able to run BIRT reports over the web, on GlassFish!

Sunday, November 21, 2010

Compact anonymous inner classes in Scala

The Scala programming language has a much more compact syntax for anonymous inner classes.

This code in Java:

import com.vaadin.ui.*;

home.addComponent(new Button("Manage Users", new Button.ClickListener() {
    @Override
    public void buttonClick(Button.ClickEvent event) {
        panel.setContent(userManagementLayout);
    }
}));

becomes this in Scala:

import com.vaadin.ui._

home.addComponent(new Button("Manage Users", (event: Button#ClickEvent) =>
    panel.setContent(userManagementLayout)
))

(Note that this relies on an implicit conversion from the function to Button.ClickListener being in scope; Scala 2.8 does not convert functions to arbitrary single-method interfaces automatically.)

Vaadin TouchKit and Google App Engine = Session expired / Out of sync errors?

I've been developing with Vaadin, Vaadin TouchKit (for Mobile UI), and Google App Engine... and getting a lot of these messages:

Out of sync Something has caused us to be out of sync with the server.
Take note of any unsaved data, and click here to re-sync.

Also session expired messages...

Any ideas / solutions / workarounds ?

I'm considering switching to JSF 2.0 + PrimeFaces ...

Saturday, November 20, 2010

Gradle Build for Spring Framework and SLF4J without Apache Commons Logging

Gradle build system supports Transitive Dependency Management.

It's very useful when you depend on a library, say Spring Framework, that uses Apache Commons Logging (artifact commons-logging:commons-logging), but you want to use another library like SLF4J and want to exclude Commons Logging.

Here is the Gradle build to exclude the commons-logging transitive dependency.

repositories {
    mavenCentral()
}

configurations {
    compile
    runtime
    all*.exclude group: 'commons-logging' // this is where we exclude
}

dependencies {
    compile group: 'org.springframework', name: 'spring-web', version: '3.0.5.RELEASE'
    compile group: 'org.slf4j', name: 'slf4j-api', version: '1.6.1'
    runtime group: 'org.slf4j', name: 'slf4j-jdk14', version: '1.6.1'
    runtime group: 'org.slf4j', name: 'jcl-over-slf4j', version: '1.6.1'
}

// Example usage: Copy dependencies
// e.g. to the Google App Engine WEB-INF/lib directory
task copyDependencies << {
    copy {
        from configurations.compile
        from configurations.runtime
        into 'war/WEB-INF/lib'
    }
}

Typecasting in Scala

In Java :

Car car = (Car)Vehicle.factory(Car.class);

(Of course, the above is a really bad example. But I can't think of a better one right now)

In Scala :

val car = Vehicle.factory(classOf[Car]).asInstanceOf[Car]

So, there are two core functions for this in Scala: classOf[C] (the counterpart of Java's C.class) and asInstanceOf[C] (the counterpart of a cast).
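For comparison, Java also has reflective counterparts of both: a Class object stands in for classOf, and Class.cast stands in for asInstanceOf. The Vehicle/Car factory below is hypothetical, made up just to mirror the example above:

```java
// Hypothetical Vehicle/Car hierarchy mirroring the factory example above.
public class CastDemo {

    public static class Vehicle {}

    public static class Car extends Vehicle {
        public String honk() { return "beep"; }
    }

    // Reflective factory returning the supertype, as in the post.
    public static Vehicle factory(Class<? extends Vehicle> type) {
        try {
            return type.newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // classOf[Car] ~ Car.class; asInstanceOf[Car] ~ (Car) cast / Class.cast
        Car car = Car.class.cast(factory(Car.class));
        System.out.println(car.honk()); // prints: beep
    }
}
```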

Wednesday, November 17, 2010

Scala IDE Eclipse Plugin for Scala 2.8.1 Final Released

After four rounds of release candidates, Scala 2.8.1.final has been released, fixing a large number of bugs, and introducing some enhancements, in particular in Scaladoc.

Perhaps most importantly, this new release is completely binary compatible with 2.8.0.final. As has been the rule, it’s still necessary to update the Scala tooling for Eclipse to take advantage of the new Scala compiler and library version. Thankfully, the aforementioned binary compatibility and the continued availability of the 2.8.0.final version of the SDT will make this process less painful than it has been previously — you can either update to the 2.8.1.final version immediately, or continue with 2.8.0.final and update to 2.8.1.final when it’s convenient for your project, whilst still being able to take advantage of ongoing improvements in the Eclipse tooling.

This is the first time that this has been possible — enabling smoother version transitions for users of the Scala tooling was one of the primary goals of the Scala IDE for Eclipse becoming an independent project and it is very satisfying to see that this is working out in practice. One of the project’s continuing goals is to ensure that Eclipse users won’t again have to choose between sticking with a particular stable release of the main Scala toolchain versus benefiting from improvements in the Eclipse integration.

The version of the Scala tooling that is offered on the front page of scala-ide.org by default has been bumped up to 2.8.1.final, and you can continue to update and install for 2.8.0.final using the alternative build update sites available here.

(from the announcement)

Scala 2.8.1 Final Version Released

A new stable release of Scala is available for download!

Many thanks to all our contributors and testers.

You can find the new Scala 2.8.1 on our Download Page.

This new release addresses a large number of bugs, and introduces some additional improvements, noticeably in the Scaladoc tool. The new Scala 2.8.1 has been designed to be fully binary compatible with the previous version 2.8.0.

(From the announcement)

Thursday, October 28, 2010

Dependency Injection in PHP vs Java

How to do Dependency Injection in PHP vs Java.

Plain PHP:

$helper_sales = new HelperSales();

Magento proprietary SPI:

// no type information!
$helper_sales = Mage::helper('sales');

Java / CDI :

@Inject
private HelperSales helperSales;

Java / get bean from Spring context :

HelperSales helperSales = appCtx.getBean(HelperSales.class);
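What @Inject buys you becomes clearer with a stripped-down sketch of a container. This is a toy, hypothetical container (not CDI or Spring) that only does field injection by exact type:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Toy sketch of what a CDI/Spring-style container does with @Inject:
// scan fields, look the field type up in a registry, and set the field.
public class TinyContainer {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Inject {}

    private final Map<Class<?>, Object> beans = new HashMap<Class<?>, Object>();

    public <T> void register(Class<T> type, T bean) {
        beans.put(type, bean);
    }

    // Fills every @Inject field of target from the registry.
    public void inject(Object target) {
        try {
            for (Field f : target.getClass().getDeclaredFields()) {
                if (f.isAnnotationPresent(Inject.class)) {
                    f.setAccessible(true);
                    f.set(target, beans.get(f.getType()));
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    // --- demo types, mirroring the Magento helper example ---
    public static class HelperSales {
        public String hello() { return "sales"; }
    }

    public static class Checkout {
        @Inject public HelperSales helperSales;
    }

    public static void main(String[] args) {
        TinyContainer c = new TinyContainer();
        c.register(HelperSales.class, new HelperSales());
        Checkout checkout = new Checkout();
        c.inject(checkout);
        System.out.println(checkout.helperSales.hello()); // prints: sales
    }
}
```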

The Java examples apply to Scala as well, of course.

Still envy PHP?

Sunday, October 24, 2010

PrimeFaces supports Bean Validation (JSR-303) in JSF 2.0

I've found (at least) one thing that PrimeFaces / JSF 2.0 does better than Vaadin when it comes to RIA (Rich Internet Applications), and that is Bean Validation (JSR-303) support.

The following Java code:

@NotNull
@Size(min=1)
private String surname;

will automatically become a validated JSF component in your XHTML/VDL file.

Cool!

Displaying AJAX Tables in PHP vs Java EE: ZFDataGrid and PrimeFaces DataTable

While developing with PHP + Zend Framework + Doctrine I missed an easy way to display/edit data using a grid/table.

A very useful component I found is ZFDataGrid.

Here's a sample code of how to use ZFDataGrid:

    function simpleAction()
    {
        // Zend_Config
        $config = new Zend_Config_Ini('./application/grids/grid.ini', 'production');

        // Grid Initialization
        $grid = Bvb_Grid::factory('Bvb_Grid_Deploy_Table', $config, 'id');

        // Setting grid source
        $grid->setSource(new Bvb_Grid_Source_Zend_Table(new Bugs()));

        // CRUD Configuration
        $form = new Bvb_Grid_Form();
        $form->setAdd(true)->setEdit(true)->setDelete(true);
        $grid->setForm($form);

        // Pass it to the view
        $this->view->pages = $grid;
        $this->render('index');
    }
It looks pretty good too.

Check the ZFDataGrid Live Demo here.

However, working with data grids using JSF 2.0 and PrimeFaces felt much more natural and easier.

Here's a sample code using PrimeFaces' DataTable :

<h:form>

    <p:dataTable var="car" value="#{tableBean.lazyModel}" paginator="true" rows="10" lazy="true"
                 paginatorTemplate="{RowsPerPageDropdown} {FirstPageLink} {PreviousPageLink} {CurrentPageReport} {NextPageLink} {LastPageLink}"
                 rowsPerPageTemplate="5,10,15"
                 selection="#{tableBean.selectedCar}" selectionMode="single"
                 onRowSelectComplete="carDialog.show()" onRowSelectUpdate="display">
        <f:facet name="header">
            Displaying 100,000,000 Cars
        </f:facet>
        <p:column headerText="Model">
            <h:outputText value="#{car.model}" />
        </p:column>
        <p:column headerText="Year">
            <h:outputText value="#{car.year}" />
        </p:column>
        <p:column headerText="Manufacturer">
            <h:outputText value="#{car.manufacturer}" />
        </p:column>
        <p:column headerText="Color">
            <h:outputText value="#{car.color}" />
        </p:column>
    </p:dataTable>

    <p:dialog header="Car Detail" widgetVar="carDialog" resizable="false"
              width="200" showEffect="explode" hideEffect="explode">
        <h:panelGrid id="display" columns="2" cellpadding="4">
            <f:facet name="header">
                <p:graphicImage value="/images/cars/#{tableBean.selectedCar.manufacturer}.jpg"/>
            </f:facet>
            <h:outputText value="Model:" />
            <h:outputText value="#{tableBean.selectedCar.model}"/>

            <h:outputText value="Year:" />
            <h:outputText value="#{tableBean.selectedCar.year}"/>

            <h:outputText value="Manufacturer:" />
            <h:outputText value="#{tableBean.selectedCar.manufacturer}"/>

            <h:outputText value="Color:" />
            <h:outputText value="#{tableBean.selectedCar.color}"/>
        </h:panelGrid>
    </p:dialog>

</h:form>

The above code may look verbose, but it packs a lot of functionality and is very easy and intuitive to customize.
When you click a row, it displays a nice dialog with a picture. Furthermore, it's actually lazy-loading 100,000,000 rows! (yes, ONE HUNDRED MILLION ROWS)

Here's how it looks:

You can see for real the PrimeFaces DataTable Lazy-loading Live Demo here.

It's very easy to add lazy-loading support to DataTable:

lazyModel = new LazyDataModel<Car>() {
    /**
     * Dummy implementation of loading a certain segment of data.
     * In a real application, this method should load data from a datasource.
     */
    @Override
    public List<Car> load(int first, int pageSize, String sortField, boolean sortOrder, Map<String, String> filters) {
        logger.log(Level.INFO, "Loading the lazy car data between {0} and {1}",
                new Object[] { first, first + pageSize });

        // Sorting and filtering information are not used; for demo purposes
        // just random dummy data is returned
        List<Car> lazyCars = new ArrayList<Car>();
        populateLazyRandomCars(lazyCars, pageSize);
        return lazyCars;
    }
};

        /**
         * In a real application, this number should be resolved by a projection query
         */
        lazyModel.setRowCount(100000000);

Not to disrespect PHP or ZFDataGrid in any way (I still need to use them for some of my work), but the experience with JSF 2.0 and PrimeFaces wins hands down. I think it's more because of PrimeFaces than JSF 2.0, but they're such a powerful combo (compared to using PrimeFaces with JSF 1.2).

I do hope that PrimeFaces provide a utility class that implements LazyDataModel for a Hibernate/HQL or JPA/JPQL query, but for now I can live with the above.

Vaadin on Google App Engine part 1: Setting up the Development Environment

Vaadin is such a nice Java web RIA application framework for building desktop-like apps, built on top of GWT AJAX Library. Vaadin uses Java to programmatically create UI components and there's very minimal CSS / HTML involved, much less JavaScript.

In this article series I will share my experiences on developing a Vaadin web application and deploying it to Google App Engine hosting.

Prepare the Vaadin + Google Development Environment


First you need to prepare your development environment.

  1. Install Eclipse IDE 3.6SR1 (Helios) - Java EE edition
  2. Install Vaadin Eclipse Plugin
  3. Install Google Plugin for Eclipse
  4. Download the latest Google App Engine SDK. This step is optional because the Google Plugin for Eclipse can also download the GAE SDK and GWT SDK for you.

Useful Reading

Vaadin on Google App Engine part 2: Creating the Web Application Project

To create a Vaadin web application on Google App Engine, in Eclipse IDE click File > New > Project... and create a new Vaadin project.

Make sure you choose Google App Engine as the Deployment configuration.

Then enable Google App Engine nature in your project:

  1. Right click your project > Properties...
  2. Go to Google > App Engine
  3. Check the Use Google App Engine checkbox
  4. Pick the Google App Engine SDK that you use or click Configure SDKs... if necessary

To ensure that JDO enhancement by DataNucleus access platform works correctly, do the following workaround:
  1. Right click your project > Properties...
  2. Go to Java Build Path
  3. Remove Web App Libraries
  4. Click Add JARs..., and select the war/WEB-INF/lib/vaadin-x.x.x.jar
Now your Vaadin application should run fine. You can also start adding JDO entities with proper DataNucleus JDO code enhancement.
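One GAE-specific detail worth double-checking (this comes from Vaadin's Google App Engine support in general, not from the wizard steps above, so verify it against your Vaadin version): web.xml should declare GAEApplicationServlet rather than the plain ApplicationServlet, so that UI state survives between requests on App Engine. The servlet name and application class below are placeholders:

```xml
<servlet>
    <servlet-name>MyVaadinApp</servlet-name>
    <!-- GAE-aware variant of com.vaadin.terminal.gwt.server.ApplicationServlet -->
    <servlet-class>com.vaadin.terminal.gwt.server.GAEApplicationServlet</servlet-class>
    <init-param>
        <param-name>application</param-name>
        <param-value>com.example.MyApplication</param-value>
    </init-param>
</servlet>
```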


Eclipse Helios In Action: Modeling with Acceleo and Xtext

The Eclipse Modeling Project is one of the most active projects within the Eclipse community. Ed Merks will give a quick overview of the Modeling projects in Eclipse IDE 3.6 Helios. Then Cedric Brun will demo Acceleo and Sebastian Zarnekow will show Xtext.

This presentation was recorded as part of the Helios In Action virtual conference: eclipse.org/helios/heliosinaction.php.

Presented by Ed Merks, Cedric Brun of Obeo and Sebastian Zarnekow of itemis

See the webinar recording video here: Helios In Action: Modeling