Wednesday, June 22, 2011

EJB 3.1 Cookbook

 

Recently I read a newly released book titled “EJB 3.1 Cookbook” by Richard M. Reese. http://www.packtpub.com/ejb-3-1-cookbook/book

If you are wondering why I read this book instead of a .NET book, you can find the answer in this post. =D

This book talks about building real-world EJB solutions with a collection of simple but incredibly effective recipes. Here is an overview of the book:

  • Build real world solutions and address many common tasks found in the development of EJB applications
  • Manage transactions and secure your EJB applications
  • Master EJB Web Services
  • Part of Packt's Cookbook series: Comprehensive step-by-step recipes illustrate the use of Java to incorporate EJB 3.1 technologies

Enterprise JavaBeans enable rapid and simplified development of secure and portable applications based on Java technology. Creating and using EJBs can be challenging and rewarding. Among the challenges are learning the EJB technology itself, learning how to use the development environment you have chosen for EJB development, and testing the EJBs.

This EJB 3.1 Cookbook addresses all these challenges and covers new 3.1 features, along with explanations of useful retained features from earlier versions. It brings the reader quickly up to speed on how to use EJB 3.1 techniques through the use of step-by-step examples without the need to use multiple incompatible resources. The coverage is concise and to the point, and is organized to allow you to quickly find and learn those features of interest to you.

The book starts with coverage of EJB clients. The reader can choose the chapters and recipes which best address his or her specific needs. The newer EJB technologies presented include singleton beans, which support application-wide needs, and interceptors, which permit processing before and after a target method is invoked. Asynchronous invocation of methods and enhancements to the timer service are also covered.
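
To give a flavour of those features, here is a minimal sketch of my own (it is not from the book, and the class names are invented) showing an EJB 3.1 singleton bean with a logging interceptor and an asynchronous method; each class would live in its own source file:

import javax.ejb.Asynchronous;
import javax.ejb.Singleton;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptors;
import javax.interceptor.InvocationContext;

// Interceptor: runs before and after each intercepted method.
public class LoggingInterceptor {
    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        System.out.println("Entering " + ctx.getMethod().getName());
        try {
            return ctx.proceed(); // invoke the target method
        } finally {
            System.out.println("Leaving " + ctx.getMethod().getName());
        }
    }
}

// Singleton: the container creates one instance, shared application-wide.
@Singleton
@Interceptors(LoggingInterceptor.class)
public class CacheBean {
    // Returns to the caller immediately; the container runs the work on another thread.
    @Asynchronous
    public void warmUp() {
        // ... expensive initialisation ...
    }
}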

The EJB 3.1 Cookbook is a very straightforward and rewarding source of techniques supporting Java EE applications.

What you will learn from this book:

  • Create and use the different types of EJBs along with the use of the optional session bean business interface
  • Create a singleton session bean for application-wide use
  • Use declarative and programmatic techniques for security, timer services, and transaction processing
  • Use asynchronous session beans to complement message driven beans
  • Support aspect oriented features such as logging and data validation using interceptors
  • Use EJBs in support of message based applications
  • Master the use of deployment descriptors and improved packaging options
  • Use EJBs outside of the Java EE environment using the embeddable container (see the sketch after this list)
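
On that last point, here is a minimal sketch of my own showing the EJB 3.1 embeddable container running in plain Java SE, reusing the hypothetical CacheBean from the sketch above; the JNDI module name "classes" is an assumption and depends on how the bean is packaged:

import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;

public class EmbeddedDemo {
    public static void main(String[] args) throws Exception {
        // Boot an embeddable EJB container inside an ordinary Java SE process.
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            Context ctx = container.getContext();
            // JNDI name: java:global/<module-name>/<bean-name>; "classes" is an assumption.
            CacheBean cache = (CacheBean) ctx.lookup("java:global/classes/CacheBean");
            cache.warmUp();
        } finally {
            container.close(); // shut the container down cleanly
        }
    }
}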

Approach

Each recipe comprises step-by-step instructions followed by an analysis of what was done in each task and other useful information. The book is designed so that you can read it chapter by chapter, or look at the list of recipes and refer to them in no particular order. It is packed with useful screenshots to make your learning even easier.

Who this book is written for

The book is aimed at Java EE and EJB developers and programmers. Readers should be familiar with the use of servlets in the construction of a web application. A working knowledge of XML is also desirable.

After reading this book, I am really glad that I took .NET as my favourite framework, because since I first learned EJB back at uni, the technology has not changed much: only slight changes and little improvement. Overall, though, this book is really great: easy to read, with great coverage of the material.

Thursday, June 09, 2011

HTML5 History API and CSS3 transitions

Check out the new HTML5 History API: it allows us to manage URL changes while CSS3 transitions handle the sliding. Permalinks are always maintained, your back button works as expected, and it's much faster than waiting for a full page load.

Basically we intercept your click, call pushState() to change the browser's URL, load in data with Ajax, then slide over to it.

$('#slider a').click(function () {
  // Change the browser's URL without triggering a page load
  history.pushState({ path: this.pathname }, '', this.href)
  // Fetch the new content with Ajax and slide it into view
  $.get(this.href, function (data) {
    $('#slider').slideTo(data)
  })
  return false // prevent the default navigation
})
When you hit the back button, an onpopstate handler is fired after the URL changes, making it easy to send you "back".

$(window).bind('popstate', function () {
  // The URL has already changed; slide to the content it points at
  $('#slider').slideTo(location.pathname)
})

Wednesday, June 08, 2011

TDD Practices

I just read a couple of good articles which I think are required reading for anyone writing unit tests.

Here are a couple of excerpts from those articles.

Concept

Finding bugs (things that don’t work as you want them to) 
=> Manual testing (sometimes also automated integration tests)

Detecting regressions (things that used to work but have unexpectedly stopped working)
=> Automated integration tests (sometimes also manual testing, though time-consuming)

Designing software components robustly
=> Unit testing (within the TDD process)

 

Good unit tests vs bad ones

TDD helps you to deliver software components that individually behave according to your design. A suite of good unit tests is immensely valuable: it documents your design, and makes it easier to refactor and expand your code while retaining a clear overview of each component’s behaviour. However, a suite of bad unit tests is immensely painful: it doesn’t prove anything clearly, and can severely inhibit your ability to refactor or alter your code in any way.

Where do your tests sit on the following scale?

[Image: a scale with pure unit tests at the extreme left (Sweet Spot A), end-to-end integration tests at the extreme right, and hard-to-maintain hybrid tests in between]

Unit tests created through the TDD process naturally sit at the extreme left of this scale. They contain a lot of knowledge about the behaviour of a single unit of code. If that unit’s behaviour changes, so must its unit tests, and vice-versa. But they don’t contain any knowledge or assumptions about other parts of your codebase, so changes to other parts of your codebase don’t make them start failing (and if yours do, that shows they aren’t true unit tests). Therefore they’re cheap to maintain, and as a development technique, TDD scales up to any size of project.

At the other end of the scale, integration tests contain no knowledge about how your codebase is broken down into units, but instead make statements about how the whole system behaves towards an external user. They’re reasonably cheap to maintain (because no matter how you restructure the internal workings of your system, it needn’t affect an external observer) and they prove a great deal about what features are actually working today.

Anywhere in between, it’s unclear what assumptions you’re making and what you’re trying to prove. Refactoring might break these tests, or it might not, regardless of whether the end-user experience still works. Changing the external services you use (such as upgrading your database) might break these tests, or it might not, regardless of whether the end-user experience still works. Any small change to the internal workings of a single unit might force you to fix hundreds of seemingly unrelated hybrid tests, so they tend to consume a huge amount of maintenance time – sometimes in the region of 10 times longer than you spend maintaining the actual application code. And it’s frustrating because you know that adding more preconditions to make these hybrid tests go green doesn’t truly prove anything.

 

Tips for writing great unit tests

Enough vague discussion – time for some practical advice. Here’s some guidance for writing unit tests that sit snugly at Sweet Spot A on the preceding scale, and are virtuous in other ways too.

  • Make each test orthogonal (i.e., independent) to all the others
    Any given behaviour should be specified in one and only one test. Otherwise if you later change that behaviour, you’ll have to change multiple tests. The corollaries of this rule include:
    • Don’t make unnecessary assertions
      Which specific behaviour are you testing? It’s counterproductive to Assert() anything that’s also asserted by another test: it just increases the frequency of pointless failures without improving unit test coverage one bit. This also applies to unnecessary Verify() calls – if it isn’t the core behaviour under test, then stop making observations about it! Sometimes, TDD folks express this by saying “have only one logical assertion per test”.
      Remember, unit tests are a design specification of how a certain behaviour should work, not a list of observations of everything the code happens to do.
    • Test only one code unit at a time
      Your architecture must support testing units (i.e., classes or very small groups of classes) independently, not all chained together. Otherwise, you have lots of overlap between tests, so changes to one unit can cascade outwards and cause failures everywhere.
      If you can’t do this, then your architecture is limiting your work’s quality – consider using Inversion of Control.
    • Mock out all external services and state
      Otherwise, behaviour in those external services overlaps multiple tests, and state data means that different unit tests can influence each other’s outcome (a sketch illustrating these corollaries appears after this tip).
      You’ve definitely taken a wrong turn if you have to run your tests in a specific order, or if they only work when your database or network connection is active.
      (By the way, sometimes your architecture might mean your code touches static variables during unit tests. Avoid this if you can, but if you can’t, at least make sure each test resets the relevant statics to a known state before it runs.)
    • Avoid unnecessary preconditions
      Avoid having common setup code that runs at the beginning of lots of unrelated tests. Otherwise, it’s unclear what assumptions each test relies on, and indicates that you’re not testing just a single unit.
      An exception: Sometimes I find it useful to have a common setup method shared by a very small number of unit tests (a handful at the most) but only if all those tests require all of those preconditions. This is related to the context-specification unit testing pattern, but still risks getting unmaintainable if you try to reuse the same setup code for a wide range of tests.

    (By the way, I wouldn’t count pushing multiple data points through the same test (e.g., using NUnit’s [TestCase] API) as violating this orthogonality rule. The test runner may display multiple failures if something changes, but it’s still only one test method to maintain, so that’s fine.)
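
    To make these corollaries concrete, here is a minimal sketch of my own in Java (JUnit 4 and Mockito; the articles' own examples are .NET, and OrderService/PaymentGateway are invented names): one unit under test, one mocked collaborator, one logical assertion.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class OrderServiceTests {

        // External service: mocked, so no real payment system or shared state is touched.
        interface PaymentGateway { boolean charge(String account, double amount); }

        // The single unit under test; its collaborator is injected (Inversion of Control).
        static class OrderService {
            private final PaymentGateway gateway;
            OrderService(PaymentGateway gateway) { this.gateway = gateway; }
            String placeOrder(String account, double amount) {
                return gateway.charge(account, amount) ? "confirmed" : "declined";
            }
        }

        @Test
        public void placeOrder_ifChargeSucceeds_returnsConfirmed() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("acct-1", 10.0)).thenReturn(true);

            // One logical assertion: the single behaviour this test specifies.
            assertEquals("confirmed", new OrderService(gateway).placeOrder("acct-1", 10.0));
        }
    }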

  • Don’t unit-test configuration settings
    By definition, your configuration settings aren’t part of any unit of code (that’s why you extracted the setting out of your unit’s code). Even if you could write a unit test that inspects your configuration, it merely forces you to specify the same configuration in an additional redundant location. Congratulations: it proves that you can copy and paste!

    Personally I regard the use of things like filters in ASP.NET MVC as being configuration. Filters like [Authorize] or [RequiresSsl] are configuration options baked into the code. By all means write an integration test for the externally-observable behaviour, but it’s meaningless to try unit testing for the filter attribute’s presence in your source code – it just proves that you can copy and paste again. That doesn’t help you to design anything, and it won’t ever detect any defects.

  • Name your unit tests clearly and consistently
    If you’re testing how ProductController’s Purchase action behaves when stock is zero, then maybe have a test fixture class called PurchasingTests with a unit test called ProductPurchaseAction_IfStockIsZero_RendersOutOfStockView(). This name describes the subject (ProductController’s Purchase action), the scenario (stock is zero), and the result (renders “out of stock” view). I don’t know whether there’s an existing name for this naming pattern, though I know others follow it. How about S/S/R (Subject/Scenario/Result)?

    Avoid non-descriptive unit test names such as Purchase() or OutOfStock(). Maintenance is hard if you don’t know what you’re trying to maintain.
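
    In Java/JUnit terms (the article's example is ASP.NET MVC), the same subject/scenario/result pattern might look like this sketch of mine; the body is omitted because only the naming matters here:

    import org.junit.Test;

    public class PurchasingTests {

        // Subject: ProductController's Purchase action
        // Scenario: stock is zero
        // Result: renders the "out of stock" view
        @Test
        public void productPurchaseAction_ifStockIsZero_rendersOutOfStockView() {
            // arrange ... act ... assert (details omitted)
        }
    }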