Thursday, October 31, 2013

Athento's capabilities for integration and interoperability [FAQs]

Today, we’ll continue answering some of the questions that our users have sent us.

1. Is it possible to make calls from a .NET application to Athento to obtain information that could be used in the application?

This was the question posed to us by our friend José Ángel Pereiras from Header in Barcelona.

José Ángel: the answer is yes. Athento provides a collection of web services that are independent of the application calling them. Currently, these web services follow the REST architecture and have been developed with Restlet, a RESTful framework for Java. Basically, REST lets you manage and transport resources over HTTP (on the web). In the near future, we will show you how to use Athento’s SOAP web services.

Let me just say that we still owe you a video that explains how these web services work. For now, I’ll explain using a screenshot.

To explore how these web services work, we’re going to use the POSTMAN REST client, a Google Chrome extension.

The service that we’re going to call is:

http://cloud.athento.com/athento/rest/input/capture/uploadDocument/xml
This web service lets us upload documents to the capture program (for example, from other applications). Since it’s a POST service, the parameters won’t be shown in the URL. To send a file to Athento, we need to provide several parameters:

  • file: the file itself (the content being uploaded)
  • title: the title of the document as it should be known in Athento
  • fileName: the name of the file
  • mimeType: the MIME type of the file
  • requestId: a number that can be used to identify the operation
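
To make this concrete, here’s a minimal sketch of the same call made from code instead of POSTMAN, using Python and the requests library. It assumes the service accepts a multipart POST and HTTP basic authentication (see the authentication note at the end of this post); the file name and parameter values are illustrative.

import requests

URL = "http://cloud.athento.com/athento/rest/input/capture/uploadDocument/xml"

# Illustrative values; adjust them to your own document and Athento instance.
params = {
    "title": "Invoice 2013-001",    # title of the document in Athento
    "fileName": "invoice-001.pdf",  # name of the file
    "mimeType": "application/pdf",  # type of file
    "requestId": "42",              # number identifying this operation
}

with open("invoice-001.pdf", "rb") as f:
    response = requests.post(
        URL,
        data=params,
        files={"file": f},           # the file being sent to Athento
        auth=("admin", "password"),  # assumes HTTP basic authentication
    )

# The XML response includes the status (OK) and the ID of the stored document.
print(response.status_code, response.text)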



In POSTMAN, we indicate the URL that calls the web service, the parameters and the method. Once everything has been entered, click “Send” to launch the call to Athento.

In the lower part of the POSTMAN screen, you’ll see the response to the call to the web service. In this case, Athento informs us that the upload has been completed (OK) and returns the ID number of the document which Athento has saved in its internal repository.

With this case, we’ve seen how to upload documents. However, there are also services that help you get data from documents – for example:

http://capture.athento.com/athento/rest/input/capture/extractCoordinates/xml 
With this web service, we can indicate a word that we know that a document contains, and request that Athento return the physical coordinates of where that word appears in the document.

http://capture.athento.com/athento/rest/input/capture/queryDocument/xml
This web service helps us obtain the document type and the metadata extracted by Athento.
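
As a purely hypothetical sketch of what calling these two services might look like from code: the parameter names below ("documentId", "word") are assumptions for illustration, not taken from the API documentation, so check the Documentation Center for the exact contract.

import requests

AUTH = ("admin", "password")  # assumes HTTP basic authentication
BASE = "http://capture.athento.com/athento/rest/input/capture"

# Ask for the physical coordinates of a known word in the document.
coords = requests.post(BASE + "/extractCoordinates/xml",
                       data={"documentId": "a1b2c3", "word": "TOTAL"}, auth=AUTH)
print(coords.text)

# Ask for the document type and the metadata Athento extracted.
info = requests.post(BASE + "/queryDocument/xml",
                     data={"documentId": "a1b2c3"}, auth=AUTH)
print(info.text)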

We’re working on a richer API that will permit more complex interactions with Athento from any application.

NB: To use Athento web services, you must first be authenticated on the platform as an Administrator. You can check which web services are available by clicking on:

http://capture.athento.com/athento/component.faces?action=ADMINISTRATION_INFO_MENU_ACTION

For more information on how to try out web services for Athento, visit our Documentation Center.

Tuesday, October 29, 2013

How can I work with SharePoint 2013 and Athento ECM Mobile?

Some of you already know about Athento’s mobile applications. Today, we’re going to talk about Athento ECM Mobile, an application that lets users access repositories that support CMIS.

A client has asked us the following question regarding the application:

How do I add a SharePoint 2013 server to Athento Mobile?

Step 1: Make SharePoint accessible

Working from a SharePoint Server 2013 console:

# Load the SharePoint PowerShell snap-in
Add-PSSnapin Microsoft.SharePoint.PowerShell
# List all web applications, including Central Administration, to find its URL
Get-SPWebApplication -IncludeCentralAdministration

This gives us the URL for SharePoint Central Administration. Open this URL in a web browser and, from that page, click on: Application Management >> Alternate Access Mappings >> Edit Public URLs.

In the "Alternate Access Mapping Collection" option, choose the site that you want to access, and under "Public URLs", select the URL that accesses the server: (http://servidor or https://servidor) as the "Default".

Step 2: Activate Authentication

Go back to "SharePoint Central Administration" and enter the “Security” option. Once there, click on "Specify authentication providers >> Default". From there, you’ll be able to enable basic authentication.

Step 3: Activate use of CMIS

In "Site Settings", go to "Site actions>>Manage Features" and click on “Activate CMIS”.


Step 4: Add the SharePoint 2013 server to Athento ECM Mobile

Click on the “+Add Server” option on your mobile application.



Next, you’ll see a form that helps you configure your access to the repository. You’ll have to provide the following data:

  • Server name: The name used to identify the repository in Athento ECM Mobile.
  • User name: The user’s name in SharePoint
  • Password: The user’s password
  • CMIS URL: http://{sharepoint-server}/_vti_bin/cmis/rest?getRepositories.
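
Incidentally, because SharePoint 2013 now speaks CMIS, you can verify the same connection data from any CMIS client before entering it in the app. Here’s a minimal sketch using Python’s cmislib (Apache Chemistry); the server name and credentials are placeholders:

from cmislib import CmisClient  # pip install cmislib

# The same data you would enter in the mobile form; values are placeholders.
client = CmisClient(
    "http://sharepoint-server/_vti_bin/cmis/rest?getRepositories",  # CMIS URL
    "DOMAIN\\user",  # user name in SharePoint
    "password",      # user password
)

repo = client.defaultRepository   # first repository exposed by the server
print(repo.getRepositoryName())   # if this prints, the endpoint is reachable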


Once you’ve filled in the form and clicked on “Accept”, the application will be ready to be used.

Next, we’ll show you a brief video of how Athento ECM Mobile works:


Athento ECM Mobile from Athento on Vimeo.

Monday, October 28, 2013

TIFF or PDF: Which output format should I choose for my document imaging projects?

In a previous post, we gave an overview of TIFF and PDF, the two leading output formats for document capture, and their main characteristics. We already know that they’re the two file formats most used in document imaging projects. But when we scan a paper document, which format is the best one to use?

There are a number of criteria to keep in mind:

  • Conservation: PDF/A, thanks to the ISO 19005-1:2005 standard, is the best option when it comes time to guarantee the longevity of files produced by document imaging.
  • Size: A normal PDF takes up less space than a TIFF file. That changes with PDF/A, however: those files are larger because fonts and other source resources are embedded in them. With image PDFs, the size depends on the compression used for the image contained in the PDF. “Searchable” PDFs are typically 10% bigger than the equivalent image. Conclusion: generally speaking, PDF tends to be the “lighter” format, but we need to consider which class of PDF we’ll be working with before drawing conclusions.
  • Searching within content: PDF comes out on top once more, since the TIFF format was created to store images, not text. Microsoft has developed a searchable TIFF format, but it isn’t an industry standard. To search for text within a TIFF image, we need an OCR application, and the extracted text has to be stored somewhere else (a database or another file).
  • Security: Unlike the TIFF format, PDFs permit restricting access by using passwords and other mechanisms.
  • Multiplatform support: Both formats are perfectly well recognized by UNIX and Windows operating systems.
  • Metadata: Both formats allow users to store metadata. However, PDF’s system is more sophisticated, since metadata can be embedded in the PDF file itself in XML format.
  • Rich text: The winner, once again, is PDF: it allows you to include links, annotations, bookmarks, tags and other elements in the file’s content.
  • Accessibility: Unlike TIFF files, PDF files can be used with assistive technologies for people with special needs; for example, a screen reader can read a PDF, which isn’t possible with TIFF.
  • Quality of presentation and visualization: Both formats perform well, but TIFF files and image PDFs are limited by the resolution of the scanned image; here, a normal PDF is the best option. There are a number of applications for viewing both formats, although the range is wider for PDF. As for online viewing, neither format has native support in web browsers, although most browsers rely on Adobe Reader to solve this for PDF. The PDF format also offers the possibility of optimizing content for the web.
Without a doubt, if we weigh all of these criteria, the format to go with is PDF. However, not all criteria carry the same weight in every project, so each case should be analyzed individually. And even if we decide that PDF is the best option, we still have to decide which class of PDF best meets our needs.

NB: A fair amount of the information contained in this post has been taken from the document called "TIFF versus PDF for Document Storage".

Wednesday, October 23, 2013

The best output formats for document imaging

In document capture projects, whether they’re for conducting imaging of documents on paper or capture from mobile units, it’s important to choose a file format that, after the scanning of documents, allows us to save those documents with the highest quality and most information possible. With this in mind, we’ve got two winning formats:
  • TIFF (Tagged Image File Format): These files carry the .tif or .tiff suffix. TIFF is a 27-year-old format from Adobe (originally developed by Aldus, which Adobe later acquired) created with the objective of standardizing document imaging. TIFF is probably the best option for preserving images, for more than one reason: it supports multi-page files, every means of color coding and many compression algorithms. It has one major drawback, though: file size. Sharing images in TIFF format probably isn’t the best solution, but capture and document management systems have options for converting .tiff files into lighter formats.
  • PDF (Portable Document Format): An open format that the ISO has turned into an international standard. It’s another Adobe invention and, even though it’s a bit younger, it’s more widely used than TIFF. To guarantee the survival and conservation of PDF documents, ISO 32000 tells software developers who produce, read or operate on PDF files which characteristics these files should have. PDF handles multi-page documents, and its strongest point is that it lets users view documents independently of the technical environment in which they were created or are being viewed (it’s multiplatform). There are many classes of PDF; the two most important groups are normal PDFs and image PDFs. True PDFs (“normal” PDFs) include formatted text, and users can search within the content or copy and paste text. The second group, image PDFs (“wrapped” PDFs), consists of a PDF that contains an image, generally in TIFF format. Because they’re images inside a PDF wrapper, you can’t search or copy/paste their text; for this category, OCR software is vital for indexing file content, running searches or extracting data. There’s also a third group, “searchable” PDFs: an image PDF with a layer of text added to it. This layer is generated by an OCR engine and offers all the possibilities of a normal PDF.
In a future post, we’ll explain how to choose between the two formats; today, we just wanted to highlight the two formats that are most commonly used when it comes time to undertake a document imaging project.

Tuesday, October 22, 2013

Do I have to change my document manager if all I want is Athento for document imaging? [FAQs]

Hey, everyone!

Given the large number of questions we received during the webinar (especially the ones that went unanswered because of time limits), we’ve decided to create a new section of the blog dedicated to answering questions. From now on, posts marked [FAQs] will be dedicated to these types of entries.

Today, we’ll start with a really interesting question that Marisol Eduardo, from Stracon GyM in Peru, asked us:

Do I have to change my document manager if all I want is Athento for document imaging? 

The answer is no. Athento is smart document management, and our aim is to offer a solution that completely covers the ECM and document management needs of businesses. However, we understand that a complete document management system isn’t implemented all at once; rather, it grows according to the needs of the business. For example, not all companies need records management right from the start; the need usually comes up later in their existence.

So, what is it that Athento offers, exactly? 
  • Modules that are completely integrated and that cover the distinct stages of our documents’ life cycle.
During the webinar, we first saw the part of Athento that provides smart capture and that can help a document imaging project evolve. Later, we saw the document manager, where we stored documents that had already been classified, along with the metadata extracted from them. Our document management functionality is the ECM module, which is optional and represents only a small surcharge on the price of Athento. You can see the characteristics of our ECM module on our website.

But what if I’ve already got a document manager?
  • Athento can help you with your document imaging project without you needing to change your document manager. 
Most of the powerful document managers available on the market (Alfresco, Nuxeo, SharePoint, OpenText, Documentum, etc.) have implemented the CMIS standard, which makes it possible to share information among different content management systems. Athento also implements the standard completely, which allows it to interoperate with any of those document management programs. What’s more, Athento has its own API and web services, which increase its capacity to communicate with other systems.
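
As a small illustration of what that interoperability makes possible, here’s a hedged sketch in Python with cmislib (Apache Chemistry): pushing a captured document into an existing CMIS-compliant repository. The URL, folder path and credentials are placeholders, not a prescribed configuration:

from cmislib import CmisClient  # works against any CMIS-compliant repository

client = CmisClient("http://your-ecm/cmis/atom", "user", "password")  # placeholder URL
repo = client.defaultRepository

# Drop a captured invoice into the folder your document manager already uses.
folder = repo.getObjectByPath("/Invoices")
with open("invoice-001.pdf", "rb") as f:
    folder.createDocument("invoice-001.pdf", contentFile=f)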

Marisol, I hope that that answers your question in some detail, and that many other users find this information to be useful.

Monday, October 21, 2013

Athento beta 2.0 - first reactions

"I’ve taken a look at the product and, in my opinion, everything is really good. Specific fields, an clear interface with a high degree of usability (indispensible for every type of user to access it), the functionality appears to be fairly high; since it’s easy to understand, the product can be rapidly incorporated into any company’s work, whatever the company does. Regarding the document part, I’d like to highlight that I think it’s interesting to be able to use a “key words” field so that the user is the one to choose the fastest way to index his or her own documents, and, that way, make the documents easily recoverable. To sum up: Well done!"
Fátima Barquero Sánchez
Document Specialist at TVE (Spanish National Television)
LinkedIn profile

Monday, October 14, 2013

Data validation and the quality of information obtained in capture processes

Normally, when we talk about capture software, functionalities such as the classification of documents or extracting data from documents are the star features. This is normal, given that they’re the two functionalities which allow businesses to obtain information which would otherwise be inaccessible in their documents. 

The precision of the results of these processes depends on a number of factors that aren’t limited to the power and quality of the software (the quality of the documents to be processed, for example). In many cases, these documents are images scanned from photocopies of photocopies, and their quality is so poor that even the human eye has trouble reading the information. Under those conditions, machines and existing technologies can’t do much more than the human eye can. Not getting information, or getting imprecise information, means that the systems consuming this data are working with mistakes. In the case of invoices, for example, if the extracted data is incorrect (suppose the extracted invoice total is €500 when it should really be €600), our accounting software is going to process an incorrect amount. That’s where data validation, either manual or automatic, becomes important.

Validating the information obtained by the capture software is one way of guaranteeing the quality of the information before sending it on to feed other systems. 


Data Validation Options for Capture Software

  • Notification for documents in which data extraction or classification falls below a set confidence threshold: in other words, if the system isn’t, say, 99% sure about the extraction or classification of a document, it alerts the user.
  • Help with previewing the document: Being able to zoom in on the document as we’re checking it helps us to locate and identify data in scanned images. 
  • Manual validation: the ability for users to correct wrong data obtained by the system.
  • Automatic validation: This option permits connections to the systems and databases where information can be found to corroborate the extracted data. Say the name of a patient has been extracted from a clinical report: the system can look the patient up in the hospital’s computer system, checking that the record exists and verifying other associated data, such as the patient’s social security number.
In Athento version 2.0, the system provides validation views for processed documents. These views allow users to correct wrong data, or data that could not be extracted. The system also lets users view the document with the help of a zoom (a magnifying glass), so that the people responsible for validating data can see it better.
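
To make the first and last options more concrete, here’s a minimal sketch of that logic in Python. The field names, the threshold and the reference database are assumptions for illustration, not Athento’s actual API:

CONFIDENCE_THRESHOLD = 0.99  # the "99% sure" level mentioned above

def validate(extraction, reference_db):
    """Route an extraction result to manual review or automatic validation."""
    if extraction["confidence"] < CONFIDENCE_THRESHOLD:
        return "needs-manual-review"  # notify a user to check the document
    # Automatic validation: corroborate the data against another system.
    patient = reference_db.get(extraction["patient_name"])
    if patient is None:
        return "needs-manual-review"  # the extracted name does not exist
    return "validated"  # record exists; associated data can now be checked too

# Example: check an extracted patient name against the hospital records.
records = {"Jane Doe": {"ssn": "123-45-6789"}}
print(validate({"patient_name": "Jane Doe", "confidence": 0.995}, records))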



Friday, October 11, 2013

Athento creates efficiency by managing more than 40,000 construction plans during the building of a subway system in one of Spain’s biggest cities

Silicon Valley,  October 10th, 2013:  Athento, the smart capture and document management software,  has helped the builders of the new subway system in the Spanish city of Malaga with the publication and distribution of more than 40,000 construction plans.

This month, Yerbabuena Software and the consortium of companies building the Metro de Malaga announced that Athento has been used as the document management system during the construction of the Metro. The consortium that led the construction, known as UTE Metro Malaga (made up of several of Spain’s most prominent construction firms, including FCC, Sando, Azvi, Comsa and Vera), has put its trust in Athento as its document management system since 2010. Once finished, the Metro will have sixteen kilometers of lines and will serve more than seventeen million passengers in its first year alone, thanks to an investment of more than €600 million in the project.

Malaga, located in the south of Spain, is a tourist hot spot with substantial population growth predicted to be as strong as that of Madrid and Barcelona. That growth, and the need for sustainable public transport, gave rise to a project, started in 1999, to provide this Spanish city with a subway system.

The mega-project carried out in this Andalusian city required the coordination of more than 1,600 people, five separate work centers, and exhaustive quality control over the work plans. UTE Metro Málaga needed the assurance of carrying out work with the most up-to-date plans, especially the most recent versions, since any element or adjustment not included in the original set of plans would result in runaway costs. Since the plans were revised up to seven hundred times per day, working with the most recent set of construction documents was crucial.

Thanks to the dedicated work of the professionals working on the Metro, and to the efficiency of the software, more than 40,000 work plans have been managed to date. This management has been crucial in preventing execution errors in a project that, this year, will cost the city close to €600 million. The project’s other great achievement has been centralizing all the project information used across five separate work centers, with the added advantage of being able to access all of the documentation 24/7, from any location and any device.

Metro de Málaga’s staff are fully aware of the value of this project tool. According to José María Lara, Metro de Málaga’s Manager of Technical Planning, “Always having the latest version of our plans available and accessible to distinct users was of prime importance for the perfect execution and organization of the construction project. Athento let us do that”.

José Luis de la Rosa, CEO of Yerbabuena Software, also adds: “Yerbabuena Software is proud that Athento made its contribution to the construction of the city’s Metro system, and contributing to the quality and efficiency of projects that involve so many resources and people, like the Málaga Metro, is, without a doubt, very valuable to us.”


Athento is smart capture and ECM software that helps organizations at all levels, and in all sectors, maintain control over their documents, guaranteeing the success of document imaging projects and automating processes thanks to the information it can obtain from any company’s documents.



About Yerbabuena Software, Inc.:
Yerbabuena Software is made up of a large group of document management software experts and currently has offices in Spain and Silicon Valley, California, in addition to important partnership agreements in countries such as Spain, Argentina, Chile, Peru, Colombia and Mexico. Its Athento product manages documents at businesses such as the DIA group, BNP Paribas and Leroy Merlin.



Wednesday, October 9, 2013

Workflows, BPM and Case Management

That businesses are interested in automating processes is nothing new. In fact, process analysis is a fairly well-developed field, full of terminology and concepts that, in many cases, tend to be used interchangeably when, in reality, they refer to different things. That’s the case with these three terms: workflows, BPM and case management. Let’s quickly define them from a technological point of view:

  • Workflows: Also known as “routing”. These are the most basic work flows and stand out not just for their simplicity (they’re fairly linear), but also because the tasks that make up the process don’t change. One example of this type of flow is the review-and-approval workflows that come as part of ECM platforms and document management systems. Even though they’re simple, these workflows can be very beneficial to businesses, which gain better control over their documentation and the information it contains.

  • BPM: The initials of Business Process Management. These processes are predictable, but far more complex than workflows. They don’t tend to be linear: they branch out into distinct paths depending on certain conditions. This allows for more flexibility; for example, when certain conditions are met, certain tasks within the process can be skipped, or the person in charge can vary the process according to the situation. Automating this type of process requires analyzing it and, in many cases, re-engineering it. The main advantage of automating these processes is, without question, the reduction in the time needed to complete them.

  • Case Management: This refers to processes that can’t be predicted. Generally, these processes involve one or more knowledge workers who have to decide on the best action to take, or who may even end up modifying the process, depending on the case. The decisions made by knowledge workers are subject to explicit guidelines and restrictions, and might require the involvement of other people.


In all three cases, one of the fundamental elements in automating a process is having correct, specific information. For example: automatically recognizing the type of a document received by email allows us to automatically initiate a review workflow. Extracting the date from a complaint helps us prioritize documents that are about to pass their response deadline; extracting the name of the client filing the complaint, or the client’s location, helps us route the case to the right professional. All of this information is contained in documents; the problem is that extracting it manually is a slow, costly process. Document capture software solves these problems with functionality dedicated to extracting metadata and identifying and classifying documents.
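
As a tiny illustration of that last point, here’s a sketch in Python of how metadata obtained by capture software could drive the automation described above. The document types, fields and deadlines are invented for the example:

from datetime import date, timedelta

def route(document):
    """Pick a workflow and priority from data extracted by capture software."""
    if document["type"] == "complaint":
        deadline = document["received"] + timedelta(days=10)  # assumed response SLA
        urgent = deadline - date.today() <= timedelta(days=2)
        return {"workflow": "complaint-resolution",
                "priority": "high" if urgent else "normal"}
    if document["type"] == "invoice":
        return {"workflow": "review-and-approval", "priority": "normal"}
    return {"workflow": "manual-triage", "priority": "normal"}

# A complaint received nine days ago is about to pass its deadline.
print(route({"type": "complaint", "received": date.today() - timedelta(days=9)}))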



Tuesday, October 8, 2013

Best practices: Four ways to reduce paper consumption at the office

Many professionals have given up on the war against paper. It’s true that we’ve spent many years talking about the paperless office while it remains an elusive goal. Perhaps the best strategy is to go little by little, changing small things that let our knowledge workers move to digital work progressively:


  1. Take the concept of the paperless office to every level. Analyze where the documents are coming from or how the process could be done directly in a digital format. Don’t look at your business as a place (or places) with well-established physical boundaries: think that your business is anywhere where someone is working (such as your sales team or your distribution facility.)

  2. Carry out a conceptual definition of your principal document types and the information in them that interests you. What typically happens is that, for one document type (invoices, for example), you end up working with different metadata in each sub-type: with invoices, you’re managing different data for each provider even though, theoretically, those invoices should all contain the same metadata. This makes document imaging and data extraction much more difficult, and you’ll probably waste time validating information you’ll never use.

  3. Identify key processes where you can start. Don’t start out by trying to do everything at once. In a perfect world, we’d be working with paper documents today and not see one shred of paper in our facilities tomorrow. Start with the processes that give you a hard ROI: the ones where you’ll see tangible results, not just impressions or benefits that are hard to quantify, like risk reduction. An improvement in response time, for example, is a quantifiable benefit that can work wonders when selling the idea to other departments.

  4. Set clear, explicit policies. What should be printed? Up to how many copies? Establish limits on documents and create a storage plan (a list of the documents that must be stored on paper and those that can be stored digitally, how they’re going to be stored, etc.). These policies need the buy-in of the people who will be in charge of making sure they’re followed, and all staff should be made aware of them.


Monday, October 7, 2013

Analysis of the capture solutions currently available on the market

This summer, we undertook a study of the capture solutions available on the market. We asked the people who know the most about the topic: businesses dedicated to document imaging and BPOs. Although the poll hasn’t wrapped up yet, we’d like to share some interesting data that we’ve found:

  • Only 20% of businesses are totally satisfied with their document imaging software. 
  • Some 75% of businesses would exchange their software for one that was easier to use. 
  • 50% of businesses would change their software for one that was easier to integrate. 
  • A total of 57% have spent money on post-purchase development projects.




The poll remains open for responses. Once it’s finished, we’ll share the results of the study in a chart. We’d like to extend an invitation to everyone who works in this market, the people who know the sector best: ask us if you want to participate.


Friday, October 4, 2013

Using OAuth Authentication for Athento

OAuth is an open authorization protocol used to log users into both web and mobile applications. It allows users to employ their existing credentials to sign into different applications without needing to create a new account for every application they use.

Facebook, LinkedIn, Dropbox or Google are some of the applications or suppliers which allow users to use this type of authentication.


Because Athento supports the OAuth protocol, it’s possible to have a button on Nuxeo (or any other ECM) that allows Nuxeo users to log in using the information Athento has stored about them. We’re also working on offering a Dropbox login soon. The technical infrastructure is already available: check out our Documentation Center for documentation on how to use OAuth to log into Athento.
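
For a feel of what an OAuth login looks like from code, here’s a minimal sketch of the standard OAuth 2.0 authorization-code flow using Python’s requests-oauthlib. The endpoint URLs and client credentials are placeholders, not Athento’s actual values; you’ll find those in the Documentation Center:

from requests_oauthlib import OAuth2Session  # pip install requests-oauthlib

AUTH_URL = "https://cloud.athento.com/oauth/authorize"  # placeholder endpoint
TOKEN_URL = "https://cloud.athento.com/oauth/token"     # placeholder endpoint

oauth = OAuth2Session("my-client-id",
                      redirect_uri="https://myapp.example/callback")

# Step 1: send the user to the provider to grant access.
authorization_url, state = oauth.authorization_url(AUTH_URL)
print("Visit:", authorization_url)

# Step 2: after the user is redirected back, trade the code for a token.
token = oauth.fetch_token(TOKEN_URL,
                          client_secret="my-client-secret",
                          authorization_response=input("Paste callback URL: "))
print(token["access_token"])  # use this token on subsequent API calls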

Thursday, October 3, 2013

Locating documents in document imaging projects

When we work on document imaging projects in businesses or large organizations, there’s always the chance that the documents to be digitized won’t all be in one geographic location. The location of the documents is crucial for determining the costs of the project and how to distribute human and technological resources.

When documents are spread over several different areas, businesses have to decide whether the documents should be processed in one centralized location or in the places where they’re created. The clearest example: banks. A bank can decide whether to send all of the documents generated in its branches to a coordinating office, whether each branch office will be responsible for processing its own documents, or whether the documents (in paper or electronic format) will be taken to a processing center. Businesses have a number of options:


  • Distributed capture: “Distributed capture” means that the documents are scanned and processed at the point of origin (in the banking example, at each branch office), which cuts document transportation costs. It’s also common for distributed capture projects not to require employees whose sole task is scanning documents. These processes don’t usually require technology for separating batches of documents, because digitization doesn’t happen all at once but in an orderly way as the documents are created. Depending on the document capture software we choose, though, this approach can end up being costly. Suppose we’re not talking about a web-based capture program but a desktop one: we’d have to install the capture program on at least one computer in every location where we want to capture documents. Many capture software providers charge per CPU, which means elevated costs for the project. Web-based capture systems, hosted on a server (the company’s own, or in the cloud), let users access the system over the web from different points and distribute the work accordingly. It might still be necessary to run the software on several machines at once because of volume, but never as many as with desktop applications.

  • Centralized capture: Businesses that opt for centralized document imaging pick one (or a few) geographic points where all of the documentation generated in the different work areas (branch offices, offices, etc.) will be processed. These centralized processing centers house the software and the people in charge of digitization. Document imaging is done on a massive scale, given the volumes involved, which means people are needed to operate the scanners, run the mechanisms that separate batches of documents, validate scanned information, and so on. What’s more, transportation costs have to be figured in if the documentation is sent to the imaging center on paper, or a way has to be found to get it there in electronic format (scanning documents to a CD, for example). Defenders of this method argue that it’s more cost-efficient to maintain one central processing area (fewer scanners, fewer computers on which to run the software, etc.).


Centralized capture
  • Advantages: Ideal if we know that all documents will arrive at one specific location. High-quality output, since documents follow the procedures and best practices of a single team.
  • Disadvantages: Possible loss of information when transporting documents. Information may take a few days to become available while documentation travels to the processing center and is processed. Needs more dedicated staff, and transporting documents risks compromising their security.

Distributed capture
  • Advantages: Quick access to documentation. Savings on transportation and labor. Avoids the information losses that can occur when transporting documents.
  • Disadvantages: Requires much more flexibility, scalability and control; the system needs to guarantee a high degree of availability. Document quality might not be as high as with documents processed in a central location.


In any case, businesses need to evaluate their work processes and specific conditions before deciding which method of document imaging suits them best. Additionally, many document imaging projects use hybrid solutions that combine centralized and distributed capture.



Tuesday, October 1, 2013

Analysis of JBoss 6.1.0 EAP (7.2 AS)


JBoss is now using version 6.1.0 EAP (Alpha+), after the release of 7.1.1 AS. 

It’s important to point out that JBoss has made several adjustments to its versioning policy between the release of version 7.1.1 AS (Community) and the subsequent 6.1.0 EAP (Enterprise). While these changes might be confusing to developers, they’re justified: the Community version was falling far behind the needs of the applications currently being developed, and JBoss is very interested in improving its service. According to Red Hat, version 6.1.0 EAP is of much higher quality, even though it’s still only in Alpha. It’s also worth noting that this version carries an LGPL license and should coincide with version 7.2.0 AS, which will be ignored from here on in.

In the following paragraphs, we’ll highlight the advantages and disadvantages of this version. Note that this evaluation is based on version 7+ (which coincides with 6.1.0 EAP, with errors fixed).

Advantages of JBoss 6.1.0 EAP
  • Certified in Java EE 6 (5.1 uses Java EE 5 and 6.0 uses uncertified Java EE6).
  • Starts up to 10 times faster than previous versions. 
  • Improved administration system. New command console. 
  • Uses fewer resources. Manages memory better when applications are opened.  
  • Simpler configuration, both of applications and of the middleware itself.
  • Noteworthy: (OSGI) More modular design. Isolation at the application level for the use of global libraries (Load classes on demand). Much easier deployments.
  • Noteworthy: JBoss Seam 3 + CDI + Weld deployment (currently uses Seam 2).
  • Cool: Includes JSF2 (Currently uses JSF1).
Disadvantages of JBoss 6.1.0 EAP
  • Performance problems with EJB shared among servers. 
  • Performance problems and saturation with heavy EJB use, both client-side and server-side.
  • Use of unnecessary global services in deployment, which negatively affects performance, memory and disk space.
Conclusions
  • Migration to version 6.1.0 EAP (7.2 AS) would generally bring far more positives than negatives: deployment is simpler and its libraries and APIs are much more advanced, with a fundamentally better JSF + Seam stack.
  • Modularization makes deployments much simpler to configure, and so far this has brought us performance gains.
  • Administration is a lot easier for systems administrators to control.
  • With regard to high-volume EJB use, we should keep an eye on performance, bearing in mind that a lot of bugs have been fixed in version 6.1.0 EAP.

This analysis has been carried out by our amazing Víctor Sánchez (@victors), Head of R&D. Ask any question you’d like on our blog, or via our Twitter feed @athento.

Has this post been useful? Don’t forget to share it with the community 
