Thursday, 26 March 2015

Remote Test Execution in SPAN’s AFiS



By Sadhanandhan B


During the last week of February, one of our customers asked us to add a feature to our test automation offering: the ability to execute tests from a remote machine. The customer had a Linux box hosting their Jenkins Continuous Integration server.

The challenge was to execute the tests on a Windows box with the required browsers and to call the execution from the Linux box using Jenkins and Apache Maven.

According to Wikipedia, “Jenkins is an open source continuous integration tool written in Java. The project was forked from Hudson after a dispute with Oracle. Jenkins provides continuous integration services for software development. It is a server-based system running in a servlet container such as Apache Tomcat”.

The Maven project describes it thus: “Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.”

The test environment was set up as shown below:


Figure 1: Test Environment Setup

There was one Linux server, which was running the Jenkins CI server and there were two Windows boxes running the various browsers used for test execution.

All our tests were created within the AFiS framework in Java, using the Page Object Model with Selenium WebDriver for test creation.  TestNG was used as the test runner and ReportNG for clear, concise reporting.  Apache Maven provided the project object model, giving our test projects better structure and flexibility.  Maven also comes in handy when distributing source, as the libraries can be downloaded on the individual machines when the tests are run rather than bundled with the source, which saved us uploads and downloads of the sources. All source files were stored in the Subversion server as part of the SCM requirements.
The tests were executed on a schedule – every day at a specified time – and the results were stored on the Jenkins server.

Code Snippet

Code inside a method annotated with @BeforeSuite or @BeforeTest in the WebDriver test:

String browserName = System.getProperty("BrowserName");
String remoteExecution = System.getProperty("RunRemotely");
String remoteMacIP = System.getProperty("RemoteMacIP");
String remoteMacPort = System.getProperty("RemoteMacPort");

if (remoteExecution.equals("true")) {
    // Pick the capabilities matching the requested browser
    DesiredCapabilities browser;
    if (browserName.equals("firefox")) {
        browser = DesiredCapabilities.firefox();
    } else if (browserName.equals("chrome")) {
        browser = DesiredCapabilities.chrome();
    } else { // "internet explorer"
        browser = DesiredCapabilities.internetExplorer();
    }

    // The remote Selenium server listens at http://<host>:<port>/wd/hub
    // (the enclosing method must declare "throws MalformedURLException")
    RemoteWebDriver driver = new RemoteWebDriver(
            new URL("http://" + remoteMacIP + ":" + remoteMacPort + "/wd/hub"), browser);
} else {
    // Use the normal local settings for Firefox, Chrome and IE
}

In the pom.xml, the following needs to be set up:

<systemPropertyVariables>

<BrowserName>firefox</BrowserName>

<RunRemotely>false</RunRemotely>

<RemoteMacIP>localhost</RemoteMacIP>

<RemoteMacPort>4444</RemoteMacPort>

</systemPropertyVariables>

Finally, calling Maven to test:

mvn -DBrowserName=firefox -DRunRemotely=true -DRemoteMacIP=10.10.0.20 -DRemoteMacPort=4444 test

Modus Operandi

The design of the POM file was changed to include variables such as the browser name, the remote host name, the port number, and whether remote execution was required. These were set up as system property values in the POM's Surefire test plugin.

Once the variables were set up as system properties, the Maven command line was invoked with the various property values to facilitate remote execution. These system properties are read inside the Selenium WebDriver framework to run the tests with RemoteWebDriver and the DesiredCapabilities class. Based on the browser name supplied, DesiredCapabilities.firefox(), .chrome() or .internetExplorer() is chosen for the remote browser, and RemoteWebDriver is initialized with the remote host and port.
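For reference, the systemPropertyVariables block shown earlier lives inside the Surefire plugin's configuration; a minimal sketch of how that section of the pom.xml might look (the plugin version and the testng.xml suite file name are assumptions):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <suiteXmlFiles>
      <suiteXmlFile>testng.xml</suiteXmlFile>
    </suiteXmlFiles>
    <systemPropertyVariables>
      <BrowserName>firefox</BrowserName>
      <RunRemotely>false</RunRemotely>
      <RemoteMacIP>localhost</RemoteMacIP>
      <RemoteMacPort>4444</RemoteMacPort>
    </systemPropertyVariables>
  </configuration>
</plugin>

In this sketch, the defaults simply mirror the values shown earlier in the article.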

Conclusion

To conclude, Apache Maven can be used from the command line to pass system property values into the Java code of Selenium WebDriver tests, which then use the RemoteWebDriver and DesiredCapabilities classes to run those tests on remote machines.

Retail Apps using MADP – (Mobile App Development Platform)


By Bhuvana Balaji

Retail mobility has seen a paradigm shift over the past decade: from bulky electronic cash registers to mobile Points-of-Sale, consolidated delivery logs to instant proof of delivery, tedious stock audits to automated stock controls, generic offers to contextual services, and so on.

Though reachability to customers was the focus at the onset of retail mobility, its incessant evolution brought in unifying solutions that coherently address the needs of every stakeholder in the retail chain. With these solutions, aspects such as Human Resource Management, Finance Management, Supply Chain Management, Sourcing-Buying-Labelling, Marketing, Merchandising and Customer Experience have been radically optimized.

Traditional ERP systems, in addition to their expensive on-premise setup costs, were complex and lacked agility and flexibility. The advent of the Mobile Enterprise Application Platform (MEAP) brought in means to address current and future mobility needs across businesses, as well as ways to leverage existing systems.

MEAP in the retail context, originally meant to be an omni-channel access gateway, addresses the challenges across the mobile application lifecycle from design to deployment. It also amalgamates various components that resolve security breaches, integration barriers, multi-platform support, management of adverse networks, scalability issues and user role provisioning. MEAP remains predominant in the B2E and B2B spaces.

MEAP offers:
  • Integrated Development Environment
  • Mobile Application Management
  • Mobile Content Management
  • Mobile Device Management
  • Mobile Service Management
Towards the end of the last decade, the industry saw the consumerization of User Interface (UI) and User Experience (UX) crop up, and a platform branched out as the Mobile Consumer Application Platform (MCAP). MCAP lays emphasis on B2C applications.

Eventually, MCAP and MEAP merged into what is now prevalent as the Mobile Application Development Platform (MADP). MADP offers a gamut of UI and configuration templates to quickly design, develop and deploy apps. It also provides a wide range of integration methods. A few of the key players in this arena are IBM, Kony, SAP, Adobe and Antenna.

With a pressing need for an elastic middleware server, mobile Backend-as-a-Service (mBaaS) arose. mBaaS provides extensive integration capabilities by exposing APIs for mobile and business components, database connectivity and access, infrastructure, and external systems like ERPs or CRMs. The integration strategy between the mobile application and the mBaaS is left to the discretion of the developer. Appcelerator, Windows Azure Mobile Services and Kinvey are a few of the many mBaaS offerings available.

mBaaS, being independent of the mobile application, exposes only the required services for the mobile application's consumption. This limitation of not being able to manage the application at runtime led to a flavor of PaaS, the Mobile App Platform-as-a-Service (MAPaaS). MAPaaS, in addition to abstracting the mBaaS, also provides a design-to-deploy platform for mobile applications. MAPaaS manages scalability, dependencies, connectivity and source code control, in addition to providing a highly available development-deployment-test environment. PaaS is polyglot and hence facilitates managing standardized policies and procedures across frameworks.

In my upcoming series of articles, we will skim through the various flavors of platforms, using retail case studies as the context. The first case study presented will be about addressing the in-store challenge of on-shelf availability (OSA) through a mobile application developed using IBM Worklight (now IBM MobileFirst).

IoT in Banks


By Raghavendra Prasad R

Do you have a mobile app that keeps track of your data usage, estimates your month-end consumption and alerts you when you cross your pre-set daily data plan? What if you had a similar mobile app for banking? Your bank could have an app that alerts you about special offers on your favorite brands, estimates your month-end balance and analyzes your spending patterns. It could provide savings, investment and financial planning suggestions as well!


Banks have been quick to adopt technology trends. Customers have become more tech savvy and the number of wearable devices keeps increasing. In banks, IoT will bring an unprecedented volume of data and data-driven customer insights. This will help banks provide tailored services to customers, extend suggestions and push the latest offers on a daily basis, based on the daily transactions and trends the banks have already analyzed.

IoT is a step ahead of what people think, and it can be applied to almost anything one can imagine. One example: a customer steps into a bank, and the manager or the point of contact for customer relations helps by instantly retrieving the customer's account and latest transaction details, keeping them abreast of the latest customer trends so they can take quick decisions based on the information available. IoT transactions are analyzed using big data analytics to give customers a tailored service.

The internet population is growing at a rapid pace: IoT is expected to grow to 20 billion devices, and 6.59 devices per person, by 2020.

However, there are a few concerns with IoT. Some of them are stated below:

Privacy: One of the concerns with IoT is that all transactions are collected and stored, so there is little privacy. For example, your smart watch, which has GPS, will send your location details to banks or other companies so that organizations know your needs and can perform focused sales.

Data Security Risk: Companies collect tons of information, down to the minutest details, and holding so much data about an individual is always prone to hacking. Incorporating the latest data security technology to keep information safe from hackers therefore takes center stage. Data security applications like Cloud Access Security, Machine-readable Threat Intelligence and Big Data Security Analytics have become a must to keep data from falling into the wrong hands.

Lack of applications: We are still in the process of exploring and exploiting the full potential of wearable devices and of non-wearable devices like kiosks in banks. At present, there are very few banking and finance applications developed for either category, as we are yet to realize the full potential of these smart devices.

IoT is one of the latest trends that will change the way mankind evolves! It will make banking easier, simpler and faster than ever before. It will also change the way business is conducted in the future. So, we can confidently state that IoT will be the future of technology trends, especially in the banking and finance sectors. IoT is the key factor that will enable banks to move into the Banking of Things.

Thursday, 19 March 2015

WebRTC and Supple Signaling

By Venkatesh D

Here is a topic that has already been discussed in breadth and depth by many WebRTC evangelists and enthusiasts. This blog tries to add a different dimension to some basic principles of WebRTC signaling.

As we know, the team that introduced WebRTC left the signaling layer open, free for application developers to choose whichever signaling scheme is convenient and efficient for them. The focus was only on making the WebRTC core, i.e. the media layer, strong and consistent. The reasons for such an architecture are the challenges that may arise from stateless web page reloads, and the fact that a developer may prefer a custom signaling scheme suitable for the use case in context.

The current W3C version of WebRTC is based on JSEP. Microsoft also has a proposal called CU-RTC-Web. Each has its own merits and demerits. However, these form the core of WebRTC and, hopefully, all the key players will converge on a common standard to resolve interoperability issues and move forward with a common standard platform.

Application Signaling: Is it important?
There are many discussed, proposed and preferred application signaling schemes, like SIP over WebSockets, XHR, XMPP, signaling over the data channel, etc.
This brings us to evaluate what could come next in the WebRTC signaling layer.

Supple Signaling Layer
Has it benefited the WebRTC application developers? Yes, it certainly has given an open space for many developers to build their proprietary signaling framework/layer, each providing its own merits and demerits.

What does it lead to?
  • Fragmented user base - The end users will be confined to a limited set of key service providers in the future. Though WebRTC is open and free, it may not reach every true end user based on the same philosophy. To make accessibility of WebRTC seamless across user base, some percentage of application signaling should also be standardized in a way that every service provider follows it. This helps the end user to easily switch between service providers seamlessly. 
  • Interoperability challenges - The absence of a standard signaling layer leads to a few monopolized service providers and hence fragmented user bases. Each user base is isolated behind its WebRTC service provider. Though the media backbone of every user base is the same WebRTC, users will not be able to reach each other across service providers' borders because of signaling interoperability challenges between the providers. Each WebRTC vendor speaks a different language to carry WebRTC, and so it is tough for them to understand one another. This further drives a possible solution: making it work through federation. 
  • Federation possibilities – As WebRTC service providers evolve and gain a significant subscribers’ share, there will be multiple large pools of customers held by each service provider. Each pool enjoys the services provided by the respective service provider, but, may not proliferate with users belonging to another service provider. Hence, this pushes the service providers to evolve as a federation to expose/exchange their services mutually with other service providers, so that, a user belonging to one WebRTC service provider can also access services from another service provider. 

SIP, on the other hand, is an established, well-adopted signaling standard for VoIP. Hence, many SIP-based VoIP service providers co-exist with little to no interoperability trouble, letting end users consume the services without worrying much about interoperability.

For WebRTC to reach the same state of existence and adoption in the future, it needs a defined minimum common standard, accepted and followed by all WebRTC service vendors. This should give end users complete liberty to choose their WebRTC service provider while still being able to connect with a user subscribed to a different WebRTC service provider. However, preserving the principle of developer independence on signaling, there can be extensions that a WebRTC service provider builds on top of the signaling standard. These extensions would be confined to its own user base and would not interfere with the basic WebRTC signaling shared across all vendors and service providers.

The above roadmap gives a win-win situation to both worlds – developers/service providers as well as end users. It also helps with quicker adoption and establishes clearer boundaries among signaling standards makers, WebRTC service providers and end consumers. Though fragmented through extensions, the core standard signaling still unites all players, resolving the interoperability issues. As the signaling standard receives widespread acceptance and adoption, the issues around network traversal, security, performance and so on start to diminish. Since WebRTC bundles audio, video and data in a session, the addition of a common signaling standard opens up an enormous range of features, services and applications for the end user.

It is good that the media core, with its signaling across browser makers, is getting stabilized, as driven by the W3C. This gives a solid platform on which a common application signaling of some form can be standardized – something like SIP, or SIP itself.

Some more thoughts, which trigger further questions -

What if SIP were considered the de facto application-level stack for VoIP on all platforms, just like TCP/IP? That would enable every device to be a VoIP terminal and let browsers use platform services efficiently. If not platforms, what if SIP were made a part of browsers while still making use of the VP8/9 codecs for video?

The intent of this entire blog is to push for a common WebRTC application signaling standard for voice, video and data over IP across all platforms, devices and browsers, so that the end user receives maximum benefit from the latest technologies!

SPAN’s Testing of Dynamic Applications – Automation Test Solutions

By Mahendra Kumar D, Mohit Verma, Sowmya BS, Awadh Narayan, Poornashree A, and Suprith Rao

It has always been a challenge to automate dynamic applications and screen objects that change their properties and behavior at run time. In this technical paper, we compile the different solutions and approaches implemented at SPAN across different situations over the years, using different test automation tools.

Due to the technological advancements and change in the nature of applications, most of the applications that we test are dynamic in nature. In some of the applications, the screen objects are dynamically created based on the business and configuration rules. 

This dynamism in applications arises for one of the following reasons:
  • The nature of the application's business demands that the web pages/screens, or even the objects on a screen, be created dynamically. For instance, in a Content Management System, all the pages that are created are specific to the context. This dynamism poses a challenge for automated testing.
  • Dynamism is imposed by the application frameworks on object attributes such as titles, IDs, etc., which are used by the automation tools and scripts to identify the objects uniquely. This makes the automation scripts fragile and poses a challenge in automating and maintaining them. The dynamic IDs usually follow a specific pattern depending on the framework or application logic, including auto-incremented numbers, for example ext-gen-123, treepicker-1038-inputEl or view:_id1:_id2:content. These numbers can change from session to session or from one application version to another, which creates difficulties for object identification (a locator sketch follows this list).
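As an illustration of how such patterned IDs can be matched, here is a minimal Selenium WebDriver (Java) sketch that locates an element by its stable ID prefix and suffix while ignoring the auto-incremented middle part; the URL, the exact locator and the typed value are assumptions for the example:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class DynamicIdLookup {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://app.example.com"); // hypothetical application URL

        // The numeric part of IDs like "treepicker-1038-inputEl" changes between
        // sessions, so match only the stable prefix and suffix of the attribute.
        WebElement picker = driver.findElement(
                By.cssSelector("input[id^='treepicker'][id$='inputEl']"));
        picker.sendKeys("2015-03-26"); // example value only

        driver.quit();
    }
}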
Depending on the testing tool, application technology stack, pattern of the dynamic objects or the application and other important aspects of the context, there are multiple solutions that can be implemented to solve this problem. Although handling dynamic objects is a common problem that one would encounter during automated testing, it demands critical thinking with required exploration and technical acumen to understand the problem pattern and devise the solution to tackle it.

SPAN has been tackling these problems for over a decade with multiple tools and technologies and has a wide variety of unique solutions to address the problem in context. Outlined below are a few example stories that describe the time-tested approaches implemented at SPAN.

Testing Story 1: Automation testing of web application (KendoUI) with dynamically changing object properties


Context:
A Nordic legacy HR application was migrated from COBOL to a .NET web application. SPAN provided the solution by developing a migration tool that performs the code conversion from COBOL to .NET. The migrated web application is an ASP.NET MVC application with a KendoUI front-end. Dealing with front-end objects in UI automation was a challenge because the object properties changed between sessions.

Automation testing was implemented with the SmartBear TestComplete tool. SPAN successfully handled this challenge by analyzing the pattern of changes in the object properties and using appropriate regular expressions and the strategies described below.
  • Issue 1: Unable to select values from the Kendo drop down (Month and Year drop downs)
Solution Approach: The KendoUI date selector object was seen by TestComplete as three different objects (textbox area, dropdown arrow, dropdown list). The select methods provided by the tool, such as SelectByValue(String Value), SelectItem(int Item) etc., did not work, and the tool offered no support for selecting values from the drop down. Selecting values from this drop down was critical for test case execution.

Based on our exploration and understanding of the KendoUI framework, we devised logic for selecting from the drop down in an indirect way, using keyboard strokes, and implemented it for the context, which resolved the problem. The idea is sketched below.
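The team implemented this inside TestComplete, but the keyboard-stroke idea itself is tool-agnostic. Here is a hypothetical Selenium-style Java sketch of the same idea (the locator and the value are assumptions):

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class KendoDropDownByKeyboard {
    // Selects a month in a Kendo-style drop down by typing into the combo input
    // instead of clicking list items that the tool cannot see as separate objects.
    static void selectMonth(WebDriver driver, String month) {
        WebElement comboInput = driver.findElement(
                By.cssSelector("input[id^='monthPicker']")); // assumed locator
        comboInput.click();               // open the drop down
        comboInput.sendKeys(month);       // type the visible text, e.g. "March"
        comboInput.sendKeys(Keys.ENTER);  // confirm the highlighted entry
    }
}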
  • Issue 2: Script failures due to object properties changing dynamically between sessions 
Solution Approach: Automation scripts were failing because the object properties changed every time the page was loaded or refreshed. Random numbers were appended to the ID property, so our scripts could not identify the objects on screen.

Using TestComplete Name Mapping to store the object properties would have been a difficult option in this context. Instead, we used the TestComplete methods Find(), FindAll() and FindChild() as required, passing the class and object type and creating the automation objects dynamically at runtime by matching a combination of properties such as inner text/value. This way, the dynamically created objects were always identified uniquely by the automation scripts, which resolved the object identification problem.

Inference:
In the above context, for Issue 1 there was no tool support for the drop-down list selection; experienced, context-specific programming solved the problem. For Issue 2, the use of alternative TestComplete functions with pattern matching resolved the object identification problem.

Testing Story 2: Enterprise Automation testing of applications with dynamically changing object properties


Context:
A US based Independent Software Vendor (ISV) sought SPAN’s help for automated testing of their portfolio with about 43 .NET smart client applications. SPAN proposed a framework built by using TestComplete.

Technology used: .NET, DevX controls and a Citrix-like environment.

Problem description:
The TestComplete tool failed to identify objects and navigate to the required screen for testing, because the menu bar and ribbon were treated as one object although they were a group of several objects. For example, we had to take the navigation path QA > QA Testing > Survey Audits to open the Audit Survey Report and test it. Writing many lines of code to navigate to the different forms across multiple applications was not an option.

Below is the line of code that identifies the above-mentioned Survey Audits control in TestComplete. Several portions of this object path are dynamic and change over time, between screens and between applications.

Sys.Process("CMCLITS.ClientQA").WinFormsObject("frmMain").WinFormsObject("ribbon").SelectedPage.Collection.Item(9).Groups.Item(3).ItemLinks.Item(7).Item.Links.Item(0).Edit.Buttons.Item(1)

Solution:
TestComplete showed the entire application as a single window (inside its Object Browser tree) without any child windows or controls. SPAN utilized the Microsoft Active Accessibility (MSAA) mechanism along with TestComplete methods such as Find(), FindAll() and FindChild(), and succeeded in automating all the controls in the above-mentioned application. The implementation tackled dynamic object identification across all the screens and all the applications in the portfolio, making our enterprise automation framework a success.


Testing Story 3: Automation testing of a banking application with dynamically changing objects


Context:

A Norway-based client had a requirement to automate its electronic signature system, which is used by different customer banks. The application had two portals: one for creating signing contracts and sending them to the recipients for signing, and a second that took care of the actual signing procedure.

Creating signed contracts was a stepwise workflow designed in the form of wizards. Each wizard page had dynamic input fields based on (1) the type of recipients the contract is sent to, (2) the number of recipients and (3) the number of document attachments allowed. Each subsequent page in the wizard was rendered based on the input supplied on the previous page. On top of this, the display of controls and the inputs were mandated and configurable by the customer banks. This behavior introduced dynamism into the objects, challenging automation testing.

Solution Approach: When the SPAN team started to look at the requirements, although there seemed to be a number of problematic areas in the design of the test flows, the root question was: how to deal with the dynamic nature of the display controls on the application pages? Different approaches were considered and suitable ones were applied, such as –
  1. Regular expressions (objects) were used to formulate the wizard titles and text verifications 
  2. Display controls were descriptively created using description objects when the control properties were known or could be derived 
  3. Controls were identified as child objects using the parent hierarchy such as browser, page, frame, and WebTables etc.
With the different approaches described above, we were able to design the scenarios listed below into one single test flow.
  1. Create a signing contract order for single signatory with a single document 
  2. Create a signing contract order for single signatory with multiple documents 
  3. Create a signing contract order for multiple signatories with a single document 
  4. Create a signing contract order for multiple signatories with multiple documents
The identified approaches successfully addressed the problematic areas in the automation test design and produced lightweight scripts in a short period of time. We used VBScript and the QTP tool to tackle this problem.

Testing Story 4: Use of TestComplete namemaps and ‘NativeWebObject.Find’ to tackle dynamic objects identification


Context:
A US-based leading provider of talent management software for the healthcare industry needed its application testing automated with TestComplete. The functionality lined up for testing comprised web pages of talent management applications. The customer had trouble automating this functionality because the page title changed for each page and with frequent application builds. Having all the page titles identified as objects in TestComplete name mapping slowed script execution and added a lot of conditional code for object identification within the test script, increasing script fragility. Added to this, multiple instances of the application had instance-specific text appended to the title of every page.

Solution Approach: After analyzing the given scenario, SPAN suggested and then implemented a better way of adding objects into name mapping via regular expressions. With just a few high-level page objects in the name map, it was possible to create the objects dynamically and identify the HTML controls on the page using 'NativeWebObject.Find'.

Inference:
The page objects, which had different titles/names, were added using regular expressions, which drastically reduced -
  • The number of objects that had to be added into name mapping, reducing script fragility. 
  • The number of lines of code for conditional checks on the desired page load in the UI, improving maintainability. 
  • The script execution time.

Testing Story 5: Automation testing of applications with dynamically changing object properties


Context:
A US-based leader providing HR software solutions for the healthcare industry required its application testing to be automated with Telerik WebAii. The functionality lined up for testing comprised web pages of talent management applications. The customer's QA team had trouble automating the functionality of a web page where the controls (such as buttons, links etc.) in the HTML grid were assigned dynamic attributes. Test scripts developed against one build (say X) were failing on another build (say Y) due to the changed attributes of the controls in the grid.

Solution Approach: After analyzing the given scenario, SPAN implemented a solution that identifies the HTML controls using the partial-match concepts supported by Telerik WebAii, together with logic that identifies all the input controls (text boxes, radio buttons, checkboxes, text areas etc.) without knowing the control identifiers, stores them in a collection, and then uses that collection for the appropriate actions based on the test case's needs.
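The underlying idea – collecting input controls without relying on their identifiers and acting on the whole collection – is tool-agnostic. A minimal Selenium (Java) illustration of it, in which the grid markup and the follow-up action are assumptions:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class GridInputCollector {
    // Collects every input-like control in a grid without knowing its dynamic
    // identifier, then acts on the collection as the test case requires.
    static void fillAllTextBoxes(WebDriver driver, String text) {
        List<WebElement> inputs = driver.findElements(
                By.cssSelector("table.grid input[type='text'], table.grid textarea")); // assumed markup
        for (WebElement input : inputs) {
            input.clear();
            input.sendKeys(text);
        }
    }
}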

Inference:
With the implemented solution, identification of the page controls became more robust, script execution performance increased, and the number of lines of code was reduced, since the control identifiers were not stored in the test solution.

Testing Story 6: One automation solution for 13 different bank applications


Context:
Our customer, a Nordic software services giant, provides services to multiple banks all across the Nordic region. They approached us to automate the testing of different bank applications in IE, Firefox and Chrome browsers. There were 13 bank applications in total.

Description:
To understand the context, we requested our customer to provide us access to sample bank applications. During the application tour, we realized that all the bank applications share the same business layer and differ only in UI implementation, which can be handled by the automation code. We created a Proof of Concept (POC) for the customer in which we automated a scenario for 6 banks with a single QTP script. The customer showed great interest in the proposed approach and gave the nod to go ahead. SPAN, specializing in automation testing, has a large library of automation frameworks, and we picked one in-house developed QTP hybrid framework to implement the automation. The only concern we had was dealing with the UI objects of 13 different applications in the QTP Object Repository, which would slow down execution and make it tedious to handle multiple objects for each bank. To overcome this, we introduced a Config XML into the existing framework as the Object Repository. We then introduced a Driver XML, in which the user can specify the bank application to be tested along with the browser, as per the testing need. In this implementation, the test script reads the Driver XML to get the name of the application under test, picks the elements from the Config XML and executes the test in the browser mentioned in the Driver XML. The business functions and general functions are managed separately in the Business Utility Library and the General Utility Library. The implemented solution handled dynamically changing objects across the banks and also dealt with the functionality variations of each bank.

Thursday, 12 March 2015

The Rise of Xamarin in Enterprise Mobility

By Shashibhushan Singh

We are living in the age of mobility. Currently, there are more mobile connections than the number of people living on planet Earth. Smartphones are becoming more popular. The GSMA Intelligence report forecasts that by 2020, smartphones will account for two thirds of all mobile connections. Unsurprisingly, there are more smartphones to be seen in all kinds of enterprises. Employees love to use their smart mobile devices, phones and tablets at work as well as outside work. This trend has been noticed by organizations and hence, they are opening up to Bring Your Own Device (BYOD) paradigm. This allows employees to be more productive on the go. Enterprises are looking to securely make in-premise data and legacy applications available on employees' mobile devices. More and more enterprises are paying heed to this trend and are interested in creating proprietary mobile applications for use by their employees.

Because of the inherent multi-platform nature of BYOD and prohibitive cost of writing multiple ports of mobile applications, enterprises need to look for cross platform tools for the development of mobile applications. For any mobile application development platform to be viable for use in any enterprise, it should have the following features:
  • Cross platform development
  • Great looking and highly responsive user interface
  • Security at the code level and for data at rest
  • Ever improving ecosystem of enterprise back-end and cloud connectors
Considering the above factors, Xamarin is a very viable mobile application development platform for enterprises. Xamarin is a cross-platform application development platform: developers can write applications for the iOS, Android and Windows Phone platforms using the C# programming language. This approach provides 60-70% code reuse across the three mobile platforms, viz. iOS, Android and Windows Phone. There is also an ever-improving cross-platform user interface library available for the Xamarin platform called 'Xamarin.Forms' – a bleeding-edge technology from Xamarin. With Xamarin.Forms, 100% code reuse across platforms becomes a possibility. Enterprises can make use of the vast amount of available .NET C# development talent (possibly an in-house team working on ASP.NET backend technologies) for the development of mobile applications using Xamarin.

Xamarin applications are native applications, and because of that they do not suffer from the performance problems visible in HTML5 and JavaScript based cross-platform applications. Xamarin C# code compiles into native binaries, making code de-compilation much tougher compared to HTML5 and JavaScript based cross-platform options. Enterprises need to make sure that their mobile applications do not leak confidential data. Xamarin includes support for a fully functional .NET security stack that developers can use in their applications, and third-party components like SQLCipher provide support for encryption of data at rest.

A great enterprise and cloud services ecosystem is getting built around Xamarin. More and more enterprises and cloud software providers are showing interest in it. Major software providers like SAP, Salesforce, IBM and Microsoft have built components for Xamarin.

SAP has collaborated with Xamarin to enable enterprise mobility for enterprises running SAP software. Salesforce SDK is available for free on the Xamarin components store. IBM has made available its MobileFirst SDK through the Xamarin component store. Microsoft Azure mobile service connectors are available for Xamarin, making it easier for enterprise mobile applications to store non-sensitive application data in the Azure cloud.

Xamarin is enjoying support from Mobile Backend as a Service (MBaaS) providers as well. MBaaS is the new buzzword in the field of enterprise mobility. MBaaS systems provide mobile optimized cloud backend system and enterprise backend connectors, making development work easier for enterprise mobile application developers. KidoZen is a notable player in the MBaaS market. KidoZen provides private and public cloud based backend for mobile applications. It also provides numerous enterprise backend connectors. KidoZen has made its SDK available on the Xamarin component store, allowing Xamarin-based mobile applications to connect with various backend systems, using very small amount of code.

In my opinion, the above points make Xamarin a good candidate for implementing enterprise mobility.

SMART Railways: A Data Intensive Transformation

By Uddeepta Bandyopadhyay

The possibilities for improving lives by using technology and data are enormous. There is an industry estimate that by 2020 there will be 26 times more connected devices than people in the world. These connections will generate astronomical amounts of data to churn through and, if analyzed smartly, the actionable intelligence generated will change the world forever.

What about the railways? Change may seem slower as trains have longer lives than many methods of transportation.



But for large and growing economies, railways play the role of an economic lifeline. Traditionally, monitoring and maintenance of tracks and coaches have been labor intensive and thus handled through manual inspection and planning. This is slow on the one hand, and inefficient and expensive on the other. Commercially, the railway technology market is worth €131bn, and a large chunk of this budget is going to be allocated towards creating 'Smart Railways'.

Introducing rail sensors integrated with predictive fault modelling can make maintenance much more efficient and less expensive. An integrated smart rail system has the potential to change railways in a very short period of time.

According to EURAIL magazine, M2M networks are now being implemented to create a more reliable and robust service using large numbers of high quality, resilient connections. In the UK, under the Disability Discrimination Act, station entrances must provide a Customer Information System (CIS) screen. M2M over 3G is providing a fast and economical alternative to cabled systems.

There are a number of technology-associated challenges to overcome to establish a smart rail system. The biggest of them is consolidating the different systems in place into a single synchronized data engine using big data technologies.

Some of these existing systems are mission critical, huge and have been running for decades. So, an ideal architecture should be a combination of a DWH, a big data platform and a real-time Internet of Things architecture.

The idea is to store every iota of information generated – whether it is a machine log, a ticket transaction or rolling stock accounting – and to drill down to find insights that help improve operations.



That was a simplified version of the task. In reality, a transformation project of this scale requires meticulous planning and years of experience to make it a success. There can be issues like incomplete data, format mismatch, execution speed, data validation, data compliance and many more. Thus, before commencing such a project it is a must for project sponsors to select the right blend of resources to build a project team, technology and process in order to avoid failure.

If successfully implemented, Smart Rail projects can drastically improve transportation. Here are a few examples:
  1. Improving customer experience: With smart use of sensors and cameras and an intelligent feedback system, it can be figured out what commuters like and do not like. This will help to build future infrastructure.
  2. Real time monitoring of the passenger coaches: With sensors, the environment inside the coach can be measured. This may help the authorities to offer a comfortable atmosphere by initiating timely repair, control or maintenance.
  3. Attention to inspection & analysis: In a huge railway network, knowing which stretch of track requires immediate attention or which coach might break down depends on manual inspection, which causes errors and losses due to breakdowns. Automated sensor-based analysis can help prioritize the maintenance schedule and make railroad systems safer and more efficient.
  4. Capacity planning: Consolidated ticket and availability data, correlated with track and coach maintenance statistics, can help railways plan profitable routes and optimize resources. This will pave a new way for capacity planning.
  5. Security: Cameras in coaches may also act as a real-time security system, which helps track any mishap, fire or crime and helps initiate immediate action.
There are many similar applications that railways are working on globally. We are hopeful that in the coming years we will see greater benefits in railway systems worldwide.

Analytics for Mainframes - A New and Unfamiliar Occurrence

By Ajay N.R.

In today's digital world, organizations are investing heavily in research on social media analytics. This could be because social media applications easily generate 10+ terabytes of data every day. The outcome of this data analysis plays a vital role in key business decisions. To summarize: any organization that can extract intelligence from raw data gets an edge over its competitors.


Interesting Points to Ponder

Although not authenticated, the talk of the town is that Mainframe servers hold 70%-80% of the world's data. This may be because Mainframes are extensively used in the Retail, Insurance and Banking & Finance domains. Further, CICS handles 30 billion transactions per day.

(Source: CICS Transaction Server Application Architecture, IBM Redbook)

Clearly, when it comes to data, Mainframe is one of the key players. But, why is Analytics an unexplored area when it comes to Mainframes? This blog explores the possible areas for applying business analytics for Mainframe Applications.

Use of Analytics for Mainframes

Most of the data on the Mainframe is stored in a structured format – relational databases and file systems. Organizations would have already invested in building intelligence out of this data, through applications engineered in-house or commercial products licensed from several vendors. Having said this, Mainframes also generate a vast amount of unstructured data in the following areas:
  • Application Log
  • System Log
  • User Log
This data may contain valuable information that can provide insights into application behavior. 
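As a simple illustration of what becomes possible once such logs are offloaded to a Hadoop cluster, even a small MapReduce mapper can start surfacing failure patterns. A minimal sketch in Java, assuming a plain-text log format and abend codes such as S0C7 or U4038 (a standard summing reducer would complete the job):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (abendCode, 1) for every abend code found in a log line, so that the
// reducer can report how often each failure type occurs.
public class AbendCodeMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            // System abends look like S0C7, user abends like U4038 (format assumed).
            if (token.matches("S0C\\d|U\\d{4}")) {
                context.write(new Text(token), ONE);
            }
        }
    }
}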

Core Insights



Players in the Industry

zDoop – a commercial version of Hadoop specifically designed to be deployed on the Mainframe. This means that data on the Mainframe continues to remain on the legacy system while still allowing us to perform various BI activities. This is very critical because data security is one of the key features of the Mainframe, and any compromise in this area will not be accepted by the Mainframe application owner. Another feature provided along with zDoop is vStorm Connect, which gives a graphical representation of the source and target locations so that users can drag and drop data from z/OS to Hadoop. The tool also takes care of character conversion during the data movement (for example, EBCDIC conversion) and handles the movement of:
  • DB2
  • VSAM
  • QSAM
  • SMF, RMF
  • Log files
Architecture for zDoop

Conclusion


Analyzing the log files can be very beneficial to the business. This technique may not help in growing the business, but it will certainly help in identifying and isolating problems in the current application. On Mainframes, where run-time costs are critical, this technique can help save the costs arising from application failures. The equation "$ saved = $ earned" certainly holds good in this situation.

Insurance to Cover Hacking

By Arvind M

Hacking is the process of exploiting vulnerabilities to gain unauthorized access to systems or resources. Hackers steal confidential data like income details, credit card details and social security numbers for identity fraud or theft. In most cases, hackers target the organizations that store such details about many individuals. In some cases, hackers also look for passwords to access confidential online transactions. The most recent high-profile hack involved accessing the confidential records of millions of customers of Anthem Inc., the second largest health insurer in the U.S. Even technology companies such as Microsoft, Apple and Facebook have been victims of hackers. One of the most recent and largest known attacks was against Sony.

In a survey of 800 members conducted last year, the National Small Business Association reported that almost half had experienced security breaches from external sources, with nearly 60 percent of those incidents resulting in business interruption. The average recovery cost of the attacks approached $8,700.

Insurance companies have come up with products that provide coverage against the theft of money or the loss incurred due to the unauthorized use of a computer. Hack insurance covers the financial loss incurred due to the loss of confidential information, regardless of how it might be lost or stolen. The coverage typically spans both first-party and third-party losses.

This can be explained in two parts:
The insurance covers the liability arising from the loss, like lawsuits filed by individual victims or by business partners that incurred losses because of the hackers' attack.

The insurance covers the organization's own costs to notify victims and monitor their credit, perform investigations and handle the public relations campaign.

Analysts predict that the total annual premiums of hack insurance products will grow to billions of dollars by the end of 2020, reflecting the increasing popularity of e-commerce.

Some tailor-made business owner insurance policies pay computer security losses under act-of-vandalism or loss-of-business clauses. However, there are also policies that cater specifically to losses caused by hacker attacks. Such policies for large organizations typically carry yearly premiums that range from $100,000 to $3 million.

Insurance companies have to be more innovative when it comes to risk measurement, as there are currently no effective risk measuring techniques available.

Thursday, 5 March 2015

A Prevalent M2M Gateway in Every Home - Smartness to Follow!

By Alok Raj

Machine to Machine (M2M) and the Internet of Things (IoT) are the buzzwords these days. First, language was the barrier to communication among humans; then came telephones, followed by PCs with the internet, then wireless phones, followed by smartphones. Slowly, communication among “things” has become more important. The idea is old, but it has grabbed attention these days because low-powered communication protocols such as Zigbee, Bluetooth 4.0 and 6LoWPAN, and Eclipse-supported protocols like MQTT, CoAP and OMA-DM, are finding some concrete space within the memory of “things”.

The “things” above mean devices – devices that are either dumb or smart. Smart devices communicate with a remote server where the data, once it arrives, is processed and deterministic information is derived. Things that are not smart (e.g. sensors) need a proxy, called a gateway, to communicate with the server.
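To make the gateway idea concrete, here is a minimal Java sketch using the Eclipse Paho MQTT client, in which a gateway forwards a sensor reading to a remote broker; the broker address, client ID, topic and payload are assumptions:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class HomeGatewayPublisher {
    public static void main(String[] args) throws MqttException {
        // The gateway (e.g. a set top box) connects to a remote M2M broker.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                "home-gateway-01", new MemoryPersistence());
        client.connect();

        // Forward a reading received from the low-powered sensor network.
        MqttMessage reading = new MqttMessage("livingroom/temperature=22.5".getBytes());
        reading.setQos(1); // at-least-once delivery
        client.publish("home/sensors", reading);

        client.disconnect();
    }
}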

One good application of M2M is home automation, where the intention is to read or control things at home remotely. There are two approaches. In the first, everything at home has a Wi-Fi transmitter/receiver plus a control board (read: microcontroller). This means every electrical point (bulbs, geyser, AC, TV etc.) should have a Wi-Fi unit attached and always be connected to the internet.

In the second approach, the devices communicate over low-powered protocols with a local gateway, and the gateway is smartly connected to the internet. In this case, the things only need a low-powered protocol transceiver and a control module. So the question is: what could this “gateway” be, with minimum investment?

There are three options:
  1. Smart phones – Almost everywhere, but once the user moves out of home, things at home need a gateway, which is always switched ON and connected. So, Smart phones could be a good gateway candidate, but mainly on the move. 
  2. Set Top Box – Can be found in almost every household. Except in a few countries, the world is going digital and QoS has improved because of Digital Broadcast. 
  3. The Wi-Fi router – NOT found everywhere. Not everyone can afford it as well!
Option 2 looks more viable, with the IPv4/IPv6 stack already integrated into the set top box software architecture. Also, currently available set top boxes have off-chip DRAM of more than 2 GB DDR3 (remember, decoding HD-quality video/audio such as MPEG-2, AAC and H.264 needs space), leaving sufficient space to run something extra!

With most IoT platforms implemented in Java, running a JVM on a set top box should not be a problem; in particular, the open M2M platforms mandate implementing the various IoT service capabilities (which can be managed remotely) in an OSGi framework. This framework is mounted upon a JVM and, if we go back in history, the very original idea of the OSGi framework was to manage smart appliances and other internet-enabled devices at home remotely, in a restricted environment such as a set top box.
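As a rough illustration of what such a remotely manageable service capability looks like, here is a minimal OSGi bundle activator in Java; the service interface and its one-line implementation are hypothetical:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical application interface for a home-monitoring capability.
interface HomeMonitoringService {
    String latestStatus();
}

public class HomeMonitoringActivator implements BundleActivator {
    private ServiceRegistration<HomeMonitoringService> registration;

    @Override
    public void start(BundleContext context) {
        // Registering the capability with the OSGi service registry lets a
        // remote-management agent discover, start and stop it independently.
        registration = context.registerService(HomeMonitoringService.class,
                () -> "all sensors reporting", null);
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister();
    }
}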

The other important aspect is the integration of the low-powered MAN (M2M area network – including IEEE 802.15.1 [i.3], Zigbee, Bluetooth, IETF ROLL, ISA100.11a, etc., or PLC, M-BUS, Wireless M-BUS and KNX) with the set top box (STB) platform. In the same timeframe in which IoT became popular, key semiconductor vendors had already integrated, for example, Bluetooth (including Bluetooth Smart) into their reference STB platforms.

A few vendors even have System on Chip (SoC) integration of low-powered RF protocols such as Zigbee and RF4CE. The integration of these two protocols was done to replace the “always lost” IR (infrared) TV remote at home, and also to get rid of the “always in line of sight” constraint of that age-old device.

Bluetooth (BT 2.0/4.0) and a few Zigbee profiles are well suited for monitoring health and the environment at home. For instance, Zigbee PRO is the profile best suited for home automation.

Moreover, the current Zigbee protocols also support beacon and non–beacon enabled networks. This means, the set top box at home would also help in locating misplaced mobiles/key rings etc. within a smart home.

One key aspect of M2M is data representation, as the volume of data will grow exponentially while every “thing” tries to become smart. The European Telecommunications Standards Institute (ETSI) mandates that every “thing” and the service capabilities of the M2M platform be based on REST (Representational State Transfer, a renowned architectural style). Hence, the REST representation matters. This serialized representation would be sent to and from the internet over satellite (from and to the set top box, respectively), as IP is already integrated with the STB platform.

A set top box, being statically located (receiving information from a geosynchronous satellite), unlike a smartphone, can be easily tracked and, if someone tries to hack your data, can be seized easily; in addition, the scrambling/descrambling methods used for an MPEG transport stream could be handy in M2M cases too.

This implies that, among all the affordable and viable options cited above, a set top box could be the best fit to run an ETSI-compliant gateway within the context of a “SMART HOME”.

Set top box vendors are creating new roadmaps for next-generation service deployments (home security, energy/health monitoring, environmental systems at home and, of course, the entertainment system). Hence, it is the set top box vendors/operators who would and should be launching the smart home product, not the TV makers or device/sensor manufacturers. The icing on the cake is smart city infrastructure: a smart city includes smart homes in addition to smart public services and infrastructure outside the home. With each home already having a set top box under the TV unit, ready to play the role of an M2M gateway, the dream of smart cities is not far behind; it is a reality of the near future.

Get Ready for the Integrated Mortgage Disclosures

By Manjeshamurthy Y.S.

The mortgage market is the single largest market for consumer financial products and services in the United States. During the last decade, the market went through an unprecedented cycle of expansion and contraction, fueled in part by the securitization of mortgages and the creation of increasingly sophisticated derivative products designed to mitigate risks to investors. During the subprime crisis, property values dropped drastically, many people lost their jobs due to the economic conditions, and several of them abandoned their underwater mortgages to foreclosure.


CFPB Initiative

As a step towards understanding the reasons for that situation, the Consumer Financial Protection Bureau (CFPB) – created by the Dodd-Frank Wall Street Reform and Consumer Protection Act – conducted a survey. Based on this survey, experts found that one of the reasons was that many borrowers availed of loans with little or no understanding of their loan terms. As important information can be buried in the fine print of the current mortgage disclosure forms, consumers can have difficulty understanding the true terms of their deal. With the current forms, consumers can also have a hard time comparing their original loan terms with their final loan offer. Consumers need to be reasonably sure that the mortgage they signed up for is the one they are getting.

As a measure to improve consumer understanding, the CFPB issued final rules on November 20, 2013 that amended existing requirements for mortgage disclosures. Specifically, the rules amend components of the Truth in Lending Act (TILA) and the Real Estate Settlement Procedures Act (RESPA) that have been in effect for more than 30 years. The existing rules require lenders and settlement agents to give consumers who apply for and obtain a mortgage loan two sets of different but overlapping disclosure forms (the Good Faith Estimate and the Truth in Lending Disclosure) describing the loan terms and costs. As the CFPB observed, “this duplication has long been recognized as inefficient and confusing for both, the consumers and the industry.” The new rules fulfill a Dodd-Frank Act requirement to address this duplication by combining the two sets of disclosures that consumers receive under TILA and RESPA in connection with applying for and closing on a mortgage loan. The resulting disclosure forms under the new rules replace the current forms:
  • The new Loan Estimate (LE) replaces the existing Good Faith Estimate (GFE) and the initial TILA disclosures 
  • The new closing disclosure replaces the HUD-1 and final TILA disclosure 
This rule takes effect on August 1, 2015, and for all loans originated after this date, lenders will have to issue these new forms. To comply with this rule, lenders are making changes to their Loan Origination Systems (creating or modifying forms) to generate the new forms.

Cloudsourcing – Bring Cirrus down

By Pradeep Pavaluru

Cirrus is a type of cloud found high up in the troposphere. In IT, since cloud technology providers are growing at a fast rate, there is a need to introduce a complete ecosystem to all businesses in a cost-effective and secure way. It will help SMBs, ISVs, large organizations and enterprises gradually align their businesses with this futuristic technology. You might have heard or talked about terminologies like outsourcing, crowdsourcing, etc.

Cloudsourcing is one such technology that is making a shift in the way IT organizations work, manage, create and communicate. Organizations are now looking for a cool, cost-effective way to achieve their goals, as Cloudsourcing has the potential to power their processes.


What is Cloudsourcing?

Cloudsourcing is a technique or process wherein organizations pay vendors or providers to manage or deliver services to their IT ecosystems from the cloud, which reduces maintenance and deployment effort in a cost-effective manner.

With big data, the Internet of Things (IoT) and cloud being the latest trends in technology, there is an ever-rising demand for higher storage capacity and faster computation. As data floods and bursts the capacities of on-premises set-ups, players in both IT and non-IT fields want to move to the cloud. When businesses opt to move to the cloud, the following factors improve and offer plenty of benefits for you and your organization:

Financial:
  • Lower people and power costs – reduces the overall workforce cost and power usage cost by your servers. 
  • No capital costs – no need to invest huge amount on procuring servers.
Accessibility:

  •  High bandwidth – based on the requirements, switching to higher bandwidth is easier.
  •  Disaster Recovery (DR) plans – the outsourcing vendors will have all the mitigation and DR plans for your systems to run 24x7.
  • Work from anywhere, anytime and on any device – you can always stay connected.
Expansion:
  • On-demand computing for agility and flexibility – you can increase your computing power, storage and capacity whenever required by your business and as per your request.
  • Pay-per-use – you pay only for what you use.
  • Focus only on business – you can focus on your business goals rather than on IT.
Security & Compliance:
  • Same security capability like on-premises – you do not have to worry about the security, as there are quite a few services available for providing advanced security for your systems and business. 
  • High performance and anytime support – your systems’ overall performance will be improved and anytime support will be provided to address your issues.
Business:
  • Easier management – uncomplicated management and hassle-free decision-making. 
  • More business and go-green factor – with less overhead on IT, you can invest more on building your organization’s value and client base as well as contribute to a greener work place.
Some well-known companies have already moved into this space. One such example is Netflix, the movie streaming giant, which has benefited in many ways, especially in terms of scalability. Netflix has a huge customer base and was unable to conduct business in the traditional data center way. That is when it looked to the cloud as a game-changing technology.

Future Trends: 

For the next few years, the focus is clearly on the cloud space. All organizations – small, medium and large – will have to create plans to model and execute. I am sure that the vendors supporting the cloudsourcing technology will also increase, each with a unique competitive edge. The big players have already invested heavily to support the growing businesses and trends in this space. 

SPAN is also eyeing a piece of this cloud war and has started building competencies in this field. Let us put our thoughts and focus on it for a brighter, cloud-enabled world.