Thursday, 26 February 2015

Missing Links in Loyalty Programs

By Habeeb Ur Rahman

A successful loyalty program goes much deeper than offering a discount or having a great product. For companies that excel in customer loyalty, it is all about knowing how to create a continuous series of positive customer experiences that grow into long-term customer engagement and profitability.

The rapid explosion of customer loyalty programs is a clear indication that retaining customers is a top priority for retailers. According to research from the International Institute of Analytics, enrollment in loyalty membership programs in the USA has topped ~2.6 billion. But the proliferation of loyalty programs and customers' willingness to join them may be diluting their intended effect: a typical household holds 21.9 loyalty program memberships, with the average person belonging to 7.4 unique loyalty programs (Maritz study, 2013). Over 84% of retail organizations agree that their loyalty programs are “NOT” highly effective.


Key challenges in building a successful Loyalty Program:
  • Measuring effectiveness of the loyalty program: Loyalty programs call for large yearly investments. However, when it comes to determining the financial impact, most retailers have no mechanism to assess the effectiveness of the program, as increased spend on loyalty programs does not necessarily translate into a proportional growth rate. 
  • Offering rewards that customers value: One of the major reasons loyalty programs fail today is that customers do not perceive any benefit from them. Companies are disengaged from understanding customer needs and preferences, and fail to tailor their loyalty programs accordingly. 
  • Cross-platform integration across all channels of customer contact: The advent of omni-channel retailing has empowered customers to purchase goods through diverse channels. This has to be coupled with the different ways customers interact on social media and other digital platforms. It becomes ever more difficult for companies to coordinate loyalty programs across all these channels and provide a seamless consumer experience. 
  • Uniqueness of the program: The majority of loyalty programs are focused on member points or rewards, and the highest benefit given to customers is either exclusive sales or discounts. As a result, most programs have no distinct differentiator that offers an advantage over other programs.

These missing links in current loyalty program management across numerous companies call for a full-fledged, configurable loyalty framework. Such a framework should plug the gaps in the current loyalty program management system. It should be the kind of change that makes the entire program highly effective, demonstrating a greater commitment to the program and engaging more deeply with customers’ needs and preferences.

The loyalty framework model will be an add-on integrator to an existing loyalty management application, so that companies retain their existing infrastructure and do not have to reinvent the wheel. It will help companies define, listen, measure and adapt their approach to enhance their loyalty program.

Characteristics of the framework:
  • Configurable rule engine: A workflow-based architecture where events and rules can be defined and customized, and transactions and actions can be measured. This helps in defining and measuring the effectiveness of the program, checking for gaps, and changing the strategy being followed. It allows companies to tweak the program mid-way through a configuration user interface (a minimal sketch of such a rule configuration follows this list). 
  • Adaptors for data capture: An interface architecture to collect data from all customer contact points, i.e. store POS, e-commerce and m-commerce, CRM systems, social media and other digital platforms. This enables retailers to listen to customers’ requirements and preferences, which helps build the differentiators that bring out the uniqueness of their loyalty programs. 
  • Analytics engine: Enables retailers to capture and track customers’ buying behavior, providing comprehensive insight into campaign performance and helping develop one-to-one relationships with loyal customers through unique, customized offers. 
  • Business intelligence dashboard: Analyzes and predicts customer behavior and derives actionable insights on the loyalty program using customer experience analysis, market basket analysis, social media analytics, text analytics and sentiment analytics. 
  • Configuration User Interface: Based on the insights received from the analytics, trends and dashboard performance, companies can configure new events, define the criteria and assign actions at any given point of the campaign cycle to make the entire loyalty program robust and flexible. 
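To make the rule engine idea concrete, here is a minimal, hypothetical sketch in Python. The event names, conditions and point values are illustrative assumptions rather than part of any specific product; a real engine would load its rules from the configuration user interface rather than from code.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical rule engine sketch: each rule pairs an event type with a
# condition on the transaction and an action (e.g. award bonus points).
# Rules live in configuration, so the program can be tweaked mid-campaign.

@dataclass
class Rule:
    event: str                          # e.g. "purchase", "social_share"
    condition: Callable[[dict], bool]   # predicate over the transaction
    action: Callable[[dict], dict]      # returns the loyalty action to apply

def award_points(points: int) -> Callable[[dict], dict]:
    return lambda txn: {"customer": txn["customer_id"], "points": points}

# Rules a retailer might define through the configuration user interface
RULES: List[Rule] = [
    Rule("purchase", lambda t: t["amount"] >= 100, award_points(50)),
    Rule("purchase", lambda t: t.get("channel") == "mobile", award_points(10)),
    Rule("social_share", lambda t: True, award_points(5)),
]

def process_event(event: str, txn: dict) -> List[dict]:
    """Evaluate every configured rule against an incoming event/transaction."""
    return [rule.action(txn) for rule in RULES
            if rule.event == event and rule.condition(txn)]

if __name__ == "__main__":
    actions = process_event(
        "purchase", {"customer_id": "C123", "amount": 140, "channel": "mobile"})
    print(actions)  # both the >=100 rule and the mobile-channel rule fire
```

Because actions are produced rule by rule, simply counting how often each rule fires gives one straightforward hook for the effectiveness measurement discussed earlier.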
By addressing these missing links and plugging the gaps in their loyalty programs, companies can build strong brand affinity and ensure higher customer satisfaction.

Wednesday, 25 February 2015

Microsoft Directory Services


By Harmandeep Saggu

Directory services provide a centralized method to store, manage, organize and access information. Microsoft (MS) offers Active Directory as its directory service, which is built upon established standards. Active Directory uses several standardized protocols like LDAP, Kerberos and DNS – LDAP protocol to store and access information, Kerberos to provide secure authentication services, and DNS to provide active directory naming and locating services. As Active Directory is built upon established standards, it is interoperable with other vendors' directory service solutions.
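As a small illustration of the LDAP piece, the sketch below queries Active Directory with the Python ldap3 library. The domain controller name, service account, password and base DN are hypothetical placeholders; it assumes the ldap3 package is installed and the account has read access to the directory.

```python
# A minimal sketch of querying Active Directory over LDAP with the ldap3 library.
from ldap3 import Server, Connection, NTLM, ALL, SUBTREE

server = Server("dc01.example.com", get_info=ALL)          # hypothetical domain controller
conn = Connection(server,
                  user="EXAMPLE\\svc_reader",               # DOMAIN\username (hypothetical)
                  password="change-me",
                  authentication=NTLM,
                  auto_bind=True)

# Find user accounts under a hypothetical base DN and return a couple of attributes
conn.search(search_base="dc=example,dc=com",
            search_filter="(&(objectClass=user)(objectCategory=person))",
            search_scope=SUBTREE,
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```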

Over the past few years, Active Directory has been widely adopted to host an organization's directories and structures, and to store users, groups, shares, network objects, etc. Active Directory also acts as a central information store for various other solutions like MS Exchange, DFS and SCCM. Apart from this, Active Directory also provides security services using an open encryption standard called Public Key Infrastructure and a proprietary policy based solution called Group Policy Objects.


Microsoft Active Directory is designed to be extensible and scalable; it can potentially store millions of objects. It is based on a multi-master replication model. This model allows several servers to act as peers and provide redundancy and high availability while maintaining the same information through replication. Along with replication, the multi-master model allows Active Directory to scale out geographically.

With scalability comes complexity. A successful and functional scalable solution requires a well-planned, strategic design in line with an organization's requirements and in-place infrastructure. As the Active Directory service forms the central store of information and authentication in an organization, it requires a flexible monitoring set-up. Along with monitoring, organizations require a cost-effective standby disaster recovery and backup solution to ensure minimal downtime during unplanned outages.

Active Directory is updated with every release of Windows Server. With the latest release of Windows Server, Active directory provides new nifty features like:
  • Single-Sign-On (SSO) solution, which permits the usage of a single identity over a wide range of services across the enterprise
  • Improved federation services with claims-based access and a multi-factor authentication mechanism, which enhance authorization controls by adding a mandatory layer of security
  • DNS security extensions support to provide validated referrals and answers to Windows clients
Starting with Windows Server 2012, Microsoft also offers Active Directory on its public cloud service, Azure. This gives organizations a globally hosted, 24/7 available MS directory service, accessed over a private tunnel on a public network.

There is no doubt that a centralized directory database and access system is necessary for every organization to store, manage and reflect its structure and objects from a unified namespace. The Windows Server 2012 R2 directory service expands the feature set of the domain and federation services. All these new and inherent features can help organizations leverage a secure, centralized, manageable and readily accessible directory service in a cost-effective package with substantial savings.

Friday, 20 February 2015

Quick Introduction to Docker (Part 2)

By Ilanchezhian Ganesamurthy

In part 1, we discussed “Why Docker?”. In this part, we will understand what Docker is, and when and where to use it, along with other information about it.

What is Docker?

Docker is an open-source project: a lightweight virtualized environment for portable and distributed applications. To elaborate, it is a lightweight container environment running on Linux Container (LXC) technology. It helps create portable deployments across machines and assists in building, deploying and running your application within a container.

Docker provides isolation and security, which allow many containers to run on the same host, each in an isolated environment. It provides the right way to run your application securely, completely isolated from other applications on the host. Docker runs without the overhead of a hypervisor, which makes it very fast and economical.

Docker helps to separate your application from the infrastructure (OS, third-party libraries, middleware, and database) and treat the infrastructure as a managed application. Docker Inc. is behind the development of the open source Docker platform.

When can Docker be used?


Docker helps package an application, its dependent third-party libraries, middleware, web server and database into one container. Just as a shipping container holds all the shippable goods in one box, a Docker container holds the application and its required external dependencies in one place. This helps achieve portability across machines and objectives such as:
  • Run a Docker container anywhere, in a local datacenter or on the cloud, with frictionless movement of the workload (container) to any infrastructure. Distribute and ship those containers for further development and testing. 
  • The same container used on-premise can be moved to Microsoft Azure and later to Amazon Web Services (AWS) or any other cloud supporting Docker containers; cloud interoperability can be achieved very easily.
  • It can automatically build the container from the source code and dependent infrastructure.
  • Docker helps to maximize resource utilization of the hardware. Where only 2 or 3 VMs can run in parallel on a development machine, 10 Docker containers can; on a high-end server, more than 100 containers can run in parallel without much of a performance problem.
  • Eliminate inconsistency between different environments (Dev, Integration, QA, and Production). This helps minimize bugs caused by configuration differences between environments.
  • Improve software quality and accelerate software delivery at reduced costs.
Docker is appreciated because:
  • It has low CPU and memory overhead compared to a VM.
  • It is extremely fast and inexpensive, with quick container boot and shutdown. A VM takes minutes to boot (sometimes longer, depending on the software installed on it), whereas a Docker container takes only a few seconds. This is extremely useful for distributed computing, where you need to start, run and kill containers in short order.
  • It enables faster delivery of your application. It deploys code very fast and greatly reduces shipping pain, removing most of the friction between committing the final tested code to the repository and running it in production.
  • It versions containers and can show the difference between two containers. Users can identify differences between versions, commit new versions, roll back and so on, simulating most of the Git workflow. This is a huge advantage compared to a VM.
  • Docker is content agnostic and infrastructure agnostic. It can be used to deploy Ruby, Java, Python, PHP and code in many more languages. It can contain any Linux distro, database (Oracle / MySQL / PostgreSQL), web server / middleware (Tomcat / Nginx / JBoss / WebLogic) and so on.
Docker strives to accomplish “Build, Ship and Run Any Application, Anywhere” seamlessly and very cost effectively (a minimal sketch of this workflow using the Docker SDK for Python follows the list below):
  • Build: package your application in a container
  • Ship: move that container from a machine to another
  • Run: execute that container (i.e. your application)
  • Any application: anything that runs on Linux
  • Anywhere: local VM, cloud instance, bare metal
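A minimal sketch of the build / ship / run steps above, using the Docker SDK for Python (the docker package). The application directory, image name and registry are hypothetical, and a local Docker daemon is assumed; the same workflow can just as well be driven from the docker command line.

```python
# Build / ship / run with the Docker SDK for Python (docker-py).
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build: package the application (a directory containing a Dockerfile) into an image
image, build_logs = client.images.build(path="./myapp",
                                        tag="registry.example.com/myapp:1.0")

# Ship: push the image to a registry so any other machine or cloud can pull it
client.images.push("registry.example.com/myapp", tag="1.0")

# Run: start the containerized application on this (or any other) Docker host
container = client.containers.run("registry.example.com/myapp:1.0", detach=True)
print(container.id, container.status)
```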
Where can Docker be used?

Organizations are firing on all cylinders to be agile and nimble in today’s competitive environment. Nowadays it is becoming the norm for new-age companies to make frequent releases of their existing products. Gone are the days when companies made 12 releases per year; today, new-age companies make 12 releases per day, and it is not uncommon to see companies make 30+ releases a day. Organizations embrace every process, methodology and technology that helps them realize the goal of being nimble and agile. Docker will play a critical role in the continuous delivery and deployment of products. Through its containerization technology, it accelerates the software release cycle. This helps organizations experiment very quickly with their product, prompt customer feedback and, based on that feedback, improve, ship and deploy very swiftly (on a daily basis, instead of a weekly or monthly release).

Among its many use cases, DevOps is one where Docker makes a huge difference. It successfully addresses most of the issues faced in DevOps. Through containers, we can easily capture the entire state of the application and its required dependencies. This makes the deployment process more efficient, consistent and repeatable across all environments.

Conclusion

In the past, Agile and Lean methodologies have tremendously impacted software delivery in a positive way. Today, DevOps is playing a significant role. They help organizations and developers to be more efficient and effective in serving customers. From the technology perspective, the container technology (specifically Docker) will play a critical role in the future of software delivery pipeline. They will enable an organization to “accelerate innovation at the speed of business change”.

If you wish to learn more about Docker, you can visit the following sites:

Docker tutorial and documentation from Docker website - https://docs.docker.com

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-getting-started

http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-1-an-introduction

Quick Introduction to Docker (Part 1)

By Ilanchezhian Ganesamurthy

 
Once in a while, a game-changing technology emerges on the radar that completely changes the way we communicate, sell, innovate, develop software and much more. Technologies like Linux, Cloud, Big Data, Mobile and Java (write once, run anywhere) are a few that fall under this category. They have provided immense benefit not only to the technology community, but also to society as a whole.

Recently, a new kid on the block has emerged that belongs to the same category. Many consider it the next big thing in the software delivery pipeline. This technology is known as “Docker”. It is gaining significant momentum and is in high demand in the open source and DevOps worlds. Some believe it is “one of the fastest-growing open source projects in history”.

In this series of blogs, we will explore Docker and analyze the five important Ws - Why, What, When, Who and Where? This will help us understand why it is disrupting the market and why we need to pay attention to it. Let us explore the reasons Docker generates so much interest among industry veterans.

But before that, let us get to know what Docker is. To put it simply, it is a container technology that enables virtualization without using a virtual machine. It is based on Linux Container (LXC) technology and promises “Build Once, Configure Once and Run Anywhere” functionality.

Before we deep dive into Docker, here are some facts about Docker usage. This will help in understanding the impact it is quietly creating:
  • There were only 500k+ Docker downloads between January and December 2014. That figure has now increased to 102.5 million downloads, a staggering increase of 800 %. 
  • In 2014 alone, the use of Dockerized apps grew tremendously (around 1,200 % growth), with roughly 71,000+ Dockerized apps recorded as of December 2014. 
  • Again, as of December 2014, there were more than 49,500 projects using Docker on GitHub, an increase of 2,200 % in 2014 alone.
Docker: Quick Facts
  • According to a TechCrunch poll, Docker is one of the candidates for Best Enterprise Startup of 2014. Other contenders include GitHub and OpenDNS. 
  • More than 175 Fortune 500 companies are using Docker in their IT environments. 
  • Most tech start-ups and fast-growing technology companies are building up their Docker capabilities. 
  • Docker is supported by major cloud players such as AWS, Google and MS Azure.
Infographic depicting Docker growth
(Courtesy: Docker Inc.)




Why is Docker so popular?

The numbers stated above are surely impressive! Now, let us understand why Docker creates so much traction in the tech community and is mentioned in so many tech forums by reputed people in the open source world.

In today’s advanced world, a lot of technologies have emerged, but enterprises still face significant challenges in their software development and delivery pipeline. This severely hampers an organization’s ability to be responsive. A few prominent challenges are:
  • It is difficult to maintain a standard development environment. Each developer machine may have a different version of the third-party dependent libraries, so code working on one developer machine may not work properly on another.
  • It is difficult to move code from one environment to another (e.g. Dev to QA, QA to Staging, Staging to Production), and each move has to be planned carefully because of the complexity associated with it.
Due to this, we often hear conversations like, “Code works in production, but not in development”; “Bug in production, but works perfectly in Dev”; “Hard to simulate production bug in QA or Dev environment”.

One of the important problems faced in DevOps is configuration mismatch between different environments: Dev, QA, Staging and Production may be installed with different versions of OS, software and dependent libraries. As a result, the code which works in Dev, may not work in QA or in production. Bugs, which are noticed in production, are not observed in QA or dev.

Virtualization helps address a few of the issues mentioned above. Until containers existed, the Virtual Machine (VM) was the only option for achieving virtualization. VMs are based on a hypervisor, emulate virtual hardware and are very heavyweight. A VM needs a guest OS and carries greater software dependencies, which increase its size; a VM image can easily reach 5 to 10 GB, depending on the OS and application dependencies. Due to the VM architecture and its large size, it occupies a lot of memory, and in many instances it also takes a long time to load.

As Docker is based on container technology, it achieves virtualization in a lightweight manner. For people who have worked with VMs, Docker can be considered a lightweight VM. It gives most of the benefits of a VM at a fraction of the overhead, with better performance than a traditional VM. A Docker container does not require a separate operating system; instead, it relies on the host OS kernel's functionality and uses resource isolation. A single host (depending on its configuration and the VM size) can run 2 to 5 VM instances successfully, and adding more VMs degrades performance severely. With Docker, however, it is easy to run 15+ containers at a time on a development-class machine, and on server-class hardware more than 75 containers can run in parallel without much performance degradation. This is possible because Docker uses the host OS kernel and does not run any guest OS of its own.

Irrespective of whether or not you use Docker, containerization is going to play a critical role in the future of software delivery. Recently, a senior Google engineer revealed that Google runs all of its software in containers and starts approximately 2 billion containers per week. Yes, you read that correctly, 2 billion per week! That works out to around 3,000 containers every second.

This should inspire you to learn more about Docker. In part 2, we dive further into what Docker is, and when and where to use it.

Thursday, 19 February 2015

Mainframe Communicates with the Outside World

By Ajay N.R.

In an age where today’s technology becomes history in a short span of time, it is interesting how mainframe technology has survived until now. IBM not only continues to support legacy machines, but also continues to invest in research on them. An example of this research is the launch of the “zBC12” mainframe server.

Current Trend

The days of mainframes being considered completely isolated systems are long gone. In today’s world, mainframe applications are constantly being integrated with distributed applications. The current trend suggests that CICS is a good candidate for integration. For a mainframe developer, good knowledge of CICS, especially from the web service point of view, is gradually becoming a mandatory skill.

Having said this, the question arises: why is integration the preferred approach when there are many other ways to modernize a legacy application?

To answer this, consider a simple scenario: you are running a logistics application on a mainframe to track shipments. You have a character-based CICS online application. This application has been running for 10+ years and a lot of code enhancements have been implemented. In short, the existing code has evolved into robust programs containing the core business logic. Currently, all customers call voice support to track their shipments.

 

What risk can we foresee from this scenario?

Although there is no flaw in the system, with the market moving towards real-time systems, customers may expect access to their shipment details from various devices, including cell phones, web-based applications and other handheld devices. If this is not provided, there is a risk of falling behind similar competitors.
To avoid this risk, the common answer would be: “Let us eliminate the mainframe and host our application on a distributed platform.” Easier said than done: is this a feasible approach? Now is a good time to look at the potential complications associated with mainframe code:
  • Lack of documentation – Since the code has evolved over time, documentation might be missing for many of the critical business rules.
  • Lack of developers – The developers who made the modifications to the code may no longer be available to provide proper guidance.
  • Lack of coding standards – The code may have been developed without proper coding standards. The factors below add to the complexity:

o   Improper field naming convention
o   Unreachable code
o   Redundant logic
o   Inappropriate commenting standards

Hence, the time, cost and risk of migrating or re-engineering a mainframe application are very high. If your mainframe application consumes more than 10,000 MIPS, even generating the estimate of time and cost for migration is a humongous task. Many cases have been reported where migration/re-engineering projects were abandoned because they crossed their maximum budget lines with diminutive progress on the migration effort.

As a mainframe owner, how can you modernize the mainframe application without leaving the mainframe behind? The answer is Mainframe Integration.

The trick in Mainframe Integration is that the business rules residing on the mainframe are published as a service, with CICS acting as a web service provider in an SOA. The code undergoes minimal changes, only in how data is received into and released out of the mainframe logic. Since the business rules do not change, the risk level drops drastically, which gives integration an edge over the other modernization techniques.
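As an illustration of what the distributed side of such an integration could look like, the sketch below calls a mainframe business rule that is assumed to have been exposed by CICS as a JSON web service. The endpoint URL, payload fields and the shipment-tracking example are hypothetical; the real interface would be whatever SOAP or JSON contract the CICS web service publishes.

```python
# Illustrative only: a distributed client calling a CICS-hosted business rule
# that has (hypothetically) been published as a JSON web service.
import requests

TRACK_SHIPMENT_URL = "https://mainframe.example.com/cics/shipments/track"  # hypothetical

def track_shipment(shipment_id: str) -> dict:
    """Send the shipment id to the mainframe business logic and return its reply."""
    response = requests.post(TRACK_SHIPMENT_URL,
                             json={"shipmentId": shipment_id},
                             timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(track_shipment("SHP-000123"))
```

The same request could equally come from a mobile app or another handheld device, which is exactly the access customers were asking for in the scenario above.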

Conclusion

Since the risk involved in Mainframe Integration is low, it is the most preferred modernization technique for midsize mainframes (5,000-10,000 MIPS) and large mainframes (over 10,000 MIPS). Current trend analysis shows that mainframe owners are implementing Mainframe Integration, even though the integrated application portfolio will contain a mixture of technologies.

Wednesday, 11 February 2015

Branding in the Age of Social Media – Part II

By Uddeepta Bandyopadhyay


In the dark of the night, you want to reach home after a party. You call for a cab using your mobile app. A pleasant gentleman turns up and drops you home safely. In return, you post positive feedback on social media. Or, you order a swanky smartphone from a famous online portal. When the package arrives, you open it to find a faulty phone, and again you post feedback on their followers’ page... oops, this time it is not so positive!

Sounds too complicated? Let us take the help of a close-to-reality story to make it more fun.

Our protagonist, Linda, is a successful entrepreneur. She is the CEO of Sporty, a famous outlet selling sports- and fitness-wear. Linda holds an MBA degree from an Ivy League institute. Since its inception, Sporty has been highly successful in selling high-quality sports-wear, and that success is reflected in a profitable business.

Since Linda became its CEO, Sporty has witnessed double-digit growth, making it one of the market leaders in the fitness-wear segment. For four consecutive years, Sporty has never had a setback, thanks to Linda’s efficient business techniques!

Initially, Sporty began with just 10 outlets; today, it boasts 45 stores all over the United States. The company has also begun expanding across various geographies, growing its business footprint all over the globe.

This year has also proved good for Sporty, with the company experiencing exponential growth from the beginning of the year. Yet, over the last few days, Sporty’s sales graph has taken a dip. With each passing day the drop becomes more prominent, and it has Linda worried. Her family vacation to India is all set for next month, but the dipping graph of her company has taken her by surprise. The only solution is to fix it as quickly as possible, so that she can breathe a sigh of relief about her company’s performance and enjoy her long-planned vacation.



All the top-notch executives of Sporty are back at the drawing board. They are determined to stop this sudden fall in sales.

First things first: what actually caused this problem? The sales managers, the store managers, marketing, PR - no one has any idea. But it is clear that, for some reason, customers are not turning up. Why?

Emilia and Sunil, stars of the sales and marketing departments, have together prepared a report on the topic in just two days. They have gone through all the relevant data in the ERP, all the reports from the in-house BI system, quality reports, packaging standards, everything. As Linda has stated, there is no problem with the internal processes. Something else is not right, and the clue probably lies outside. A full-fledged customer survey would take a long time, would be expensive, and success is not guaranteed.

After much discussion, George suggested something. George heads the IT department; he is calm and poised, and speaks only when required. He had heard of an Indian company at a data science conference and was left with the impression that their Social Sentiment Analysis Service could give Sporty a direction. After a brief discussion, it is all thumbs up. Within the next 48 hours, the Indian team is signed on.

The Indian company has assigned Prakash Raj, a young and dynamic project manager, to this assignment. Above all, Linda liked his enthusiastic spirit and cheerful attitude. Within a few hours, Prakash and his team got started.

Prakash set up a Sentiment Analysis Tool for Sporty on Sporty’s own server (as Sporty desired). After a discussion with the store managers and the sales team, Prakash quickly figured out that before a Thursday a fortnight back, the trend was normal. The question now was: what happened after that Thursday? With a few clicks, Prakash’s team set up a data ingest job for all the online media that relate to or connect groups likely to be Sporty’s customers. Almost 40 minutes had passed since the data ingest started. Linda could feel she was a bit nervous, and decided on a quick coffee break.
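The story does not describe the tool’s internals, but the sketch below gives a flavor of the kind of job it might run: a naive keyword-based polarity score over ingested posts, aggregated by day, so that a sudden negative turn stands out. The keywords, dates and posts are invented for illustration; real sentiment analysis uses far richer language models.

```python
# Hypothetical sketch of a daily sentiment score over ingested social media posts.
from collections import defaultdict

POSITIVE = {"love", "great", "awesome", "recommend"}
NEGATIVE = {"offensive", "boycott", "terrible", "avoid"}

def polarity(text: str) -> int:
    """Naive polarity: positive keywords minus negative keywords."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def daily_sentiment(posts):
    """posts: iterable of (date, text) tuples coming from the data-ingest job."""
    scores = defaultdict(list)
    for date, text in posts:
        scores[date].append(polarity(text))
    return {d: sum(s) / len(s) for d, s in scores.items()}

if __name__ == "__main__":
    sample = [("2015-02-05", "love the new Sporty running shoes, great fit"),
              ("2015-02-12", "that poster in store 32 is offensive, boycott Sporty"),
              ("2015-02-12", "terrible move by Sporty, avoid the brand")]
    print(daily_sentiment(sample))  # the score clearly turns negative on the 12th
```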

In the cafeteria, her thoughts took her back to the great days of Sporty. She remembered the initial days of struggle and the efforts they had put in to build the company. And now, all of a sudden, everything was at stake. She was brought back to reality by Albert, the sales head: Prakash now had a definite clue.

Through sentiment analysis, Prakash had figured out that the problem started with a poster in store 32, a local promotion poster that had offended a certain group. The members of the group posted about it on social media and the comments spread like wildfire. Clearly, the drop in footfall was a conscious decision by their customer base.

The problem was clear. Now it was time for PR to act. An official clarification went out to the press and, with a proactive approach and information from the Sentiment Analysis Tool, it reached the intended audience quickly. To Linda’s relief, the sales graph headed north again.

A month later, while flipping through a page-turner at a famous resort in Goa, she received a text message from Albert - “Sporty is opening its 50th store today”.

Linda chuckled, “India is a cool place indeed”…..

In the above case, the Sporty brand was at stake due to an event that was proliferated by a viral expansion loop. If no action had been taken, it would have spiraled out of control. As the scenario shows, the appropriate tools and strategy are a must to control the viral expansion loop and use it in your favor.

Are you ready yet? Do you have the right tool, expertise and strategy in place? It’s time to introspect.

Welcome to the New Age of Social Media!

Read Part I at http://spansys.blogspot.in/2015/02/branding-in-age-of-social-media-part-i.html

Branding in the Age of Social Media – Part I

 By Uddeepta Bandyopadhyay


Since the expansion of social media, the world has become an interconnected maze that can make or break a brand in no time. To understand this fully, let us take a look into the past, about two decades ago. It was the age of connecting to the internet with PCs, and many of us used a dial-up connection. A known joke among surfers was: if you want to open an image-rich web page, just type the URL, take a cup of coffee, sit back and relax. If you are lucky, the page may have opened by the time your coffee is over!

The internet was a phenomenon then; today, it is a basic utility. It helped people gain knowledge, but not connect the way they do today. For marketers, the tools to reach their audience were limited to yellow pages, postal mail and television. The idea was to create a splash so that your audience would recall your brand when they were actually buying. You had no way to measure the receptiveness of your brand in the decision-making process. Marketers learned whether their efforts had doomed or bloomed only after a significant investment had already been made. To grab the attention of consumers on a large scale, repeated ads in newspapers or ad slots on TV during prime time were the main marketing pursuits.

Today, TV ads are still a primary means of publicity, but their success or failure is no longer confined to TV viewership. Once launched, a TV ad is dissected, shared and made popular by the social community on the internet, which draws several active individuals online to YouTube/TV to watch the ad and popularize it. The way marketing is implemented has therefore definitely changed. We can now safely divide it into two segments:
  • Viral marketing 
  • Viral expansion loop 
In the first, people pass the word about your brand to others because they are either interested in it or unhappy about it.

In the second, people obtain value from your brand by engaging and motivating their entire personal and professional community to either use it or not use it.

Which one do you think is more interesting? The second option? But, it is not easy.

For that, you must have a social strategy and the right tools to measure the effectiveness of your brand strategy from the beginning and change your track in between, if needed. 

Look forward to a more interesting read, covering close-to-reality stories, in Part II…..



Read Part II at http://spansys.blogspot.in/2015/02/branding-in-age-of-social-media-part-ii.html

Testing Stories from SPAN’s Trenches (Part 4)

By Lakshminarasimha Manjunatha Mohan




Story 4 - Principled Work Culture with Good Process for the Context Yields





Context: 

A Nordic bank with net banking, net loan and net agreement applications had been using a security consulting firm to perform vulnerability assessments and penetration tests. This time, SPAN was given a chance to demonstrate its skills: the same applications were given to both SPAN and the consulting firm, to carry out penetration tests starting on the same day, with the intention of evaluating both.

Description:

It was just another opportunity for SPAN to showcase its capabilities. We were conducting penetration testing on the applications and, on the third day or so, we received a message from the System Owner that the other consulting firm had already submitted its test report to the bank. He was keen to understand our testing progress and, more importantly, to check what and how many vulnerabilities we had found. We responded firmly that the testing was not complete: we still had more tests to do, and the tests were yielding many vulnerabilities. We took another two days to complete our testing and deliver the report. By then, our System Owner was a bit anxious to know how our tests had gone.

We received a response from the customer bank: we had reported 79 vulnerabilities, of which 63 were HIGH-risk vulnerabilities related to the OWASP Top 10, such as Cross-Site Scripting, and the rest were low- to medium-risk vulnerabilities.

The consulting firm had reported 17 vulnerabilities, of which 13 were HIGH-risk. We were also informed that the vulnerabilities we had reported included the 17 reported by the other consultant.

Finally, the customer had a question about how quickly we could resolve the high-risk vulnerabilities. It was a catch-22 situation: they did not have much time left to go to market, but could not go without closing the security vulnerabilities. The decision was to find quick workaround solutions with minimal or no code changes.

We recommended a simple input sanitization filter as a quick fix to the problem, and it was implemented promptly. The products were re-tested and released.
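To make the idea of such a quick fix concrete, here is a minimal sketch of an input sanitization filter, written as Python WSGI middleware purely for illustration (the story does not say what stack the bank's applications used). It HTML-escapes incoming query-string values before they reach the application, blunting reflected cross-site scripting without touching business code; a production filter would also cover form bodies, headers and cookies.

```python
# Minimal, illustrative input sanitization filter as WSGI middleware.
import html
from urllib.parse import parse_qsl, urlencode

class SanitizingMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # HTML-escape every query-string value before the app sees it
        query = environ.get("QUERY_STRING", "")
        cleaned = [(k, html.escape(v))
                   for k, v in parse_qsl(query, keep_blank_values=True)]
        environ["QUERY_STRING"] = urlencode(cleaned)
        return self.app(environ, start_response)

# Usage (hypothetical application object):
#   application = SanitizingMiddleware(application)
```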

Take Home:

The well-established process, procedures and methodology we followed ensured good testing, based on risk analysis with threat models. It also helped us assist the customer in releasing their project/applications on time.

Read Story 1 at http://spansys.blogspot.in/2015/02/testing-stories-from-spans-trenches.html

Testing Stories from SPAN’s Trenches (Part 3)

By Lakshminarasimha Manjunatha Mohan






Story 3: Quick and Context Driven Solutions at Critical Times, Leading to Success







Context: 

An HRMS product from Sweden: the project scope was to migrate an existing COBOL-based desktop application to a .NET-based web application. The solution was implemented through an automated, verbatim code conversion from COBOL to VB.NET; as a consequence, the development team had limited knowledge of the application. The project was fixed-price, with any delay in release resulting in a weekly penalty. In this constrained environment, the challenge was to test the .NET application, ensure it worked exactly the same way as the COBOL application, integrate the product and make it ready for release. We had 4 weeks to get about 8 modules tested, integrated and released for User Acceptance Testing.

Description:

To understand the application's business quickly, we conducted exploratory testing. All the testing being done here was checking against the COBOL application, so our test oracle was quite simple: PASS in the COBOL app should also be PASS in the .NET app, FAIL in the COBOL app should be FAIL in the .NET app, and any deviation is declared a bug. We soon realized that it was impractical to test the application manually and achieve the coverage needed to ensure application quality within the available time. Moreover, we were well aware that checking is for the machine and testing is for humans.

We decided to implement a table-driven hybrid automation framework that could deal with both the COBOL desktop application and the .NET web application. This was not an easy task to take up in such a constrained environment, but we decided to take it on with 3 expert testers. The result was a set of automated checks that included page-by-page comparison of the two versions of the application. With this, all the checking was shifted to the night, and we started testing each day's implementation on the same day. With some extra work, we could see the benefit from the second day onwards. Further, to help developers debug faster and integrate the modules with the system, we developed file comparison utilities that were critical for 3-4 modules. For the reporting module, we implemented a utility that compares the data in the two databases and produces an output listing the missing or extra data (a sketch of this idea follows below).
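The sketch below illustrates the database comparison idea in Python. The table, query, key columns and the use of SQLite are assumptions made only to keep the example self-contained; the real utility ran against the project's own databases and schemas.

```python
# Illustrative database comparison: fetch the same report query from the old
# and new databases and report rows missing from, or extra in, the new one.
import sqlite3  # stand-in engine; the real project used different databases

QUERY = "SELECT emp_id, salary, dept FROM payroll ORDER BY emp_id"

def fetch_rows(db_path: str) -> set:
    with sqlite3.connect(db_path) as conn:
        return set(conn.execute(QUERY).fetchall())

def compare(old_db: str, new_db: str) -> dict:
    old_rows, new_rows = fetch_rows(old_db), fetch_rows(new_db)
    return {
        "missing_in_new": sorted(old_rows - new_rows),  # data the migration lost
        "extra_in_new": sorted(new_rows - old_rows),    # data the migration invented
    }

if __name__ == "__main__":
    diff = compare("cobol_export.db", "dotnet_export.db")  # hypothetical files
    for kind, rows in diff.items():
        print(kind, len(rows), "rows")
```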

With all of this, we completed the testing with high coverage of the application and released it for UAT within the acceptable time limit. Overall, the testing implementation helped the project team gain complete control over the release.

Take Home:

Leveraging the experience and expertise of critical people helped us develop and deploy an innovative solution at a critical time. This eventually helped in completing the project on time and, more importantly, without loss.

Read Story 4 at http://spansys.blogspot.in/2015/02/testing-stories-from-spans-trenches_51.html

Testing Stories from SPAN’s Trenches (Part 2)

By Lakshminarasimha Manjunatha Mohan

Story 2 - Focused on Solving the Customer’s Real Pain Points Rather than Problem Symptoms – Overcoming the Constraints/Traps Imposed by Tools



Context:

A US-based insurance company was automating its application testing with HP QTP. The functionality slated for testing was insurance quote generation in the form of PDFs. The customer's testers had trouble automating this functionality, as QTP had no out-of-the-box support for extracting or validating data from PDFs. This was a critical scenario to automate, as there were many alternatives to be tested, and validating the calculations and values in the tables was a tedious task. The automation testers on the customer's side did all the groundwork, researching different tools that could help them and then approaching HP for help with the PDF validation, yet nothing worked in their favor. As one more try, they approached SPAN to help them with it.

Description:

We started with a short discussion of the problem and explored the issue. We strongly believe that a tool is like a vehicle and automation is the skill, like driving: when you know how to drive, it does not matter which car you are driving. The same was the case here; we believed that what was shown to us was a symptom of the problem and not the real problem. Thus, we started looking for the right problem we were expected to solve. Eventually, we defined the problem we needed to address as “validate the insurance quote data, in the form of tables, against a set of expected values”, rather than PDF data validation.

With this definition of the problem, we started looking into the technical aspects behind the quote creation. In this case, the application was using the Apache Formatting Objects Processor (FOP), reading the formatting object (FO) tree and rendering the resulting pages to PDF as output. This insight helped us move forward with the solution: exploring FOP, we found that it is possible to capture the Area Tree XML, an internal representation of the resulting PDF document, with exactly the same layout of pages and contents.

This was enough for us to formulate a solution to the real problem. Our solution was to validate the insurance quote data in the Area Tree XML, and separately to validate that the PDF was created, covering both the data correctness we needed to exercise and the quote creation in PDF format. We implemented a VBScript that works within QTP, validating the data in the Area Tree XML exactly the way the insurance quote would have been validated in the PDF. Further, we validated the existence of a new PDF file to confirm the creation of the insurance quote (the approach is sketched below).
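The sketch below shows the same approach in Python rather than the project's VBScript: parse the Area Tree XML that FOP can emit and check that the expected quote values appear in the rendered text. The file name, labels and expected figures are hypothetical.

```python
# Illustrative validation of quote values against FOP's Area Tree XML.
import xml.etree.ElementTree as ET

EXPECTED = {"Annual Premium": "1,284.00", "Deductible": "500.00"}  # hypothetical

def extract_text(area_tree_path: str) -> list:
    """Collect every piece of rendered text from the Area Tree XML."""
    root = ET.parse(area_tree_path).getroot()
    return [elem.text for elem in root.iter() if elem.text and elem.text.strip()]

def validate_quote(area_tree_path: str) -> dict:
    """Check that each expected value appears somewhere in the rendered text."""
    texts = extract_text(area_tree_path)
    return {label: value in texts for label, value in EXPECTED.items()}

if __name__ == "__main__":
    results = validate_quote("quote_area_tree.xml")  # hypothetical file
    for label, passed in results.items():
        print(f"{label}: {'PASS' if passed else 'FAIL'}")
```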

This solved the customer’s problem; the same solution has been in use for more than three years and has been replicated for similar checks in other applications.

Take Home:

Many times, a symptom of the problem is presented as the problem itself, but at SPAN we take care to find the real pain point and strive to address that, rather than solving a symptom. In this case, the tool did not support extracting data from PDFs, but the customer's real problem was not tool support; it was testing the insurance quotes. The solution we delivered addressed exactly that.


Read Story 3 at http://spansys.blogspot.in/2015/02/testing-stories-from-spans-trenches_18.html

Testing Stories from SPAN’s Trenches (Part 1)

By Lakshminarasimha Manjunatha Mohan

At SPAN, we have been testing a variety of software for many years. Every piece of software we have tested has been different and challenging, because the context of testing has been different. Context drives the testing, and this makes testing problems complex and different to address every time. The testing context includes the technology stack, team, stakeholders (including the customer), available resources, artifacts, time, environment, etc. Here is a short collection of testing stories that illustrate how we have efficiently handled different testing problems. For customers and prospects, these stories illustrate how we test software at SPAN; for learners, each of these stories is a gem that teaches a lesson.



Story 1 - Collaboration and Team Work, Leading to Success in Time-Constrained Projects with No Time Commitment from the Customer






Context:

A Swedish product company had engaged SPAN to test its flagship content management and web publishing product. The scope was to test the functionality of the web application and APIs across a host of operating system and browser combinations. It was a collaborative product development team, with the customer's developers in Stockholm, Sweden and the testing team in Bangalore, India. SPAN was being evaluated for its delivery capabilities during the first 4 months.

Description:

We started the project by going through the testing trail – the bugs, the product and the user. The first problem to solve was to test and change the status of 1000+ bugs that had been logged by developers, with the constraint that the development team, and anyone else with knowledge of the product, had no time to clarify questions from the testers. The challenge was to understand bugs that were not described well enough. To add to this, there were several bugs that had been resolved in the application but whose status had never been updated.

At this juncture, we ran weekly sprints in Scrum, testing and learning the product in an exploratory fashion while testing the bugs. We did come across many questions along the way, which were listed and discussed internally. It was a classic display of teamwork and collaboration: everyone in the team participated, discussed the questions and explored further to find answers, helping one another. This continued in iterations, and finally we had only 13 bugs/questions left for clarification by the customer. We managed to pass the evaluation and provide value to the customer even though limited documentation was available and the customer could offer only limited facilitation time.
           
Take Home:

The highly collaborative teamwork brought cohesion to the team and helped everyone find answers to the questions. The context-driven exploratory testing approach helped the team gain a good understanding of the system even with little outside support and not much documentation. Further, the customer started seeing the value of our work, and we automatically became part of their team.


Read Story 2 at http://spansys.blogspot.in/2015/02/testing-stories-from-spans-trenches_11.html

Tuesday, 10 February 2015

IoT - The Big Game and the Challenge

By Nagesh Rao

The world has seen a never-ending revolution in communication, particularly over the last four decades. The internet, which was initially meant to be the military's private data network, has created a revolution in communication and provided a perfect platform for further innovation.

In its early phase, it gave us the platform for email, hosting and browsing information. It provided the next wave of innovation when it became the platform for e-commerce and online business. Social media was the most recent wave on the internet, helping connect anyone to anyone. And now, we are on the verge of another wave, called the Internet of Things.

The Internet of Things (IoT) is heralded as an innovation that can revolutionize our lives. Systems that once worked in silos will now work in a synchronized manner. Products once composed merely of electrical and mechanical components will become complex systems that combine hardware, networks of sensors, microprocessors, software, and wireless or IP networks. There is tremendous hype and excitement, as this new technology trend is poised to connect billions of devices using the internet. The data from devices can be converted into knowledge and intelligence, and the system can then act on that intelligence with little or no manual intervention.

Gartner forecasts that 4.9 billion connected things will be in use in 2015, up 30 percent from 2014, reaching 25 billion by 2020. Accenture predicts that industrial IoT will contribute $14.2 trillion by 2030, a prediction echoed by GE and Cisco. However, it is not the forecasts and the numbers that are exciting; it is the numerous opportunities this technology is opening up across a wider spectrum than ever before that is generating interest. Be it tracking one's freight, monitoring kids at school or on a picnic, keeping an eye on aged parents who live by themselves in a far-away town, or a smart city where all the different modes of transport are linked, IoT can be of great assistance everywhere. Nearby buses and taxis get an alert when a train is arriving at the local station; inside the station, the escalator switches on automatically, and so do all the lights and the display system. With IoT, things we used to see in science fiction movies are about to become real!

However, with this excitement comes the question: is the technology ready? If not, what are the real concerns that need to be addressed before we can confidently say that IoT is ready, with no holds barred?

Infrastructure Working in Silos: An IoT implementation consists of interconnected hardware, such as sensors and actuators, gateways, and IoT servers on the software side. When all these are put together, it is called a platform. Though several companies are coming up with their own platforms, unfortunately most of them are vertical stacks rather than something that can benefit a wide range of consumers and developers working in different domains. The full benefit of IoT cannot be leveraged as long as data sits in silos or closed systems. Open platforms need to be in place to allow and encourage niche applications in different verticals, so that experts in different domains do not have to worry about the core IoT technology and can instead apply their vertical or business knowledge. The open platform need not be restricted to software; it should extend to hardware as well, like having a common sensor network across cities!
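To ground the sensors-gateways-servers picture, here is a small sketch of a device publishing readings to a platform over MQTT, a lightweight publish/subscribe protocol commonly used in IoT deployments (the article does not prescribe any particular protocol). The broker address, topic and payload are hypothetical, and the paho-mqtt package is assumed.

```python
# Illustrative device-to-platform hookup over MQTT (paho-mqtt 1.x style
# constructor; paho-mqtt 2.x additionally takes a callback API version argument).
import json
import time

import paho.mqtt.client as mqtt

BROKER = "iot-gateway.example.com"                 # hypothetical gateway / IoT server
TOPIC = "city/station-12/sensors/temperature"      # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, port=1883)
client.loop_start()

# A sensor node publishing a small reading on a regular schedule
for _ in range(3):
    reading = {"device_id": "sensor-0042", "celsius": 21.7, "ts": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()
```

An open platform would let applications in any vertical subscribe to topics like this without caring which vendor built the sensor or the gateway.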

No or Various Standards: Having standards is very important for interoperability between networks, for sending data across platforms, and for ensuring that any device can connect to any platform without modifying its software. The standards also need to be globally applicable and acceptable.

The problem with IoT is that, as of today, there are 15 standards! This creates concern and confusion in an industry that is already vast and multifaceted. We see the AllSeen Alliance, the Open Interconnect Consortium, the Thread Group, the Industrial Internet Consortium and the latest entrant, oneM2M, backed by well-known names in the technology sector. Fortunately, all these standards groups and companies share a common goal: to expedite the growth of IoT. If these groups do not succeed in consolidating into one group with one standard, it will be left to time to address the problem. As happened in the wireless industry, the winners among these disparate groups will be judged on marketing and speed of delivery as much as on technical merit.

Battery Life of the Device: Billions of devices that were previously unconnected objects will now provide small amounts of data on a regular, perhaps infrequent, basis. Unfortunately, most of these devices are designed to run on batteries, making their maintenance not only a costly affair but a very tedious exercise as well. Prolonged battery life, energy sourced from unconventional power sources, and a departure from the standard power management technology used in today's embedded systems are a must for the future development of the Internet of Things. We already see some developments where microcontroller designers are working on ultra-low-power devices, featuring extremely low-power hibernation states and capable of operating from very small amounts of energy, measured in nanoamps.

Privacy and Security: From the user's perspective, the security of personal data is the most important aspect; this includes data captured in public, such as images or behavioural traits and habits. There is already much talk about how 'smart' bins were used by over-enthusiastic marketing firms to draw data from the phones of passers-by through Wi-Fi signals without the users even knowing. Even worse, a popular health-tracking device that measures steps, calories burned and sleep allowed hackers to infer the "nocturnal activities" of its users, beyond their sleep!

While the onus for privacy issues like these falls entirely on the application and its designers, there are other areas where security is of higher importance and needs to be attended to with much diligence. One such area is the data: it needs to be protected both in transit and in storage. The other area is the hardware, and this is something that requires serious thought.

Though privacy and security is the most talked-about topic today, it is something that can be addressed by the basic golden rules of security. The good thing is that the basics of internet security are still there and very much applicable across the major areas of IoT. For hardware, we need to ensure that there is no anonymity of devices: each connected device should have an identity, with a mechanism in place so that compromising one device risks only that device's data and not any other device's data on the network.
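A minimal sketch of the per-device identity idea described above, under the assumption that each device holds its own secret key: every message is signed with an HMAC and verified against only that device's key, so compromising one key exposes only that device's data. Key storage and distribution are deliberately out of scope.

```python
# Per-device identity via independent HMAC keys: one compromised key cannot
# forge or decrypt traffic for any other device.
import hmac
import hashlib
import json

# Hypothetical key registry: one independent secret per device
DEVICE_KEYS = {
    "sensor-0042": b"k3y-for-0042-only",
    "sensor-0043": b"k3y-for-0043-only",
}

def sign(device_id: str, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(DEVICE_KEYS[device_id], msg, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: dict, signature: str) -> bool:
    expected = sign(device_id, payload)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    reading = {"celsius": 21.7, "ts": 1424649600}
    tag = sign("sensor-0042", reading)
    print(verify("sensor-0042", reading, tag))   # True
    print(verify("sensor-0043", reading, tag))   # False: another device's key fails
```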

As Gartner has placed IoT at the peak of the hype cycle, the next few years are when technocrats and companies need to put their minds and effort into addressing all these issues, so as to live up to the hype and expectations. This will certainly happen, as it has with other innovations in the past: there was hype, there were challenges, then the hype faded, the technology matured, and we all benefited from it. That is what is going to take place in the sphere of IoT as well.