
Why is COBOL cool again?

May 25, 2017

by Todd Erickson and Elizabeth Richards

Discover why the recent spotlight on COBOL systems and the shortage of qualified COBOL programmers isn't really about a lack of people; it's about a lack of knowledge.

At Phase Change, we pay attention to legacy systems and their challenges. So, why was a mainframe language developed in 1959 suddenly the topic of multiple news articles?

Reuters and The New Stack recently published articles about COBOL, an often-overlooked programming language that was developed before John F. Kennedy became the 35th President of the United States.

Organizations like the Department of Veterans Affairs and large financial companies such as Bank of New York Mellon and Barclays PLC rely on COBOL applications for nearly $3 trillion worth of daily transactions. But they've used COBOL for decades, so that alone doesn't explain the recent attention.

A little history

The U.S. government developed the common business-oriented language (COBOL) in conjunction with Rear Admiral Grace Hopper and a coalition of industry and higher-education representatives. Its simplicity and portability have stood the test of time, and they are the main reasons why 50-year-old COBOL applications continue to play a critical role in finance, banking, and government operations. That, plus the inertia that characterizes large, critical systems.

The recent attention comes because the engineers who maintain COBOL-based systems are leaving the workforce, there aren't qualified developers available to replace them, and these institutions are freaking out. The COBOL brain drain is threatening the organizations that economies are built upon.

Brain drain refers to how departing software engineers leave with all of their system and domain knowledge supposedly locked away in their brains. That knowledge is thought to be lost from the organization forever.

The average age of a COBOL programmer is somewhere between 45 and 60, and those programmers are retiring. The problem is that few programmers are interested in replacing them, and the availability of COBOL training resources has dropped precipitously because it's just not a cool language anymore.

We won't repeat all of the statistics that show how much COBOL code is still in use and how important those systems are. Read the Reuters and The New Stack articles, which both mirror a series of comprehensive feature articles published by ComputerWorld in 2012. The metrics and themes haven’t changed much.

Basically, these companies have three options for dealing with COBOL brain drain, and all involve high risk. First, they can replace their COBOL systems with systems built on more modern programming languages. Such a replacement project took the Commonwealth Bank of Australia five years and $749.9 million, 30% over budget. The risk associated with implementing such a massive new system has kept most financial institutions from attempting it.

Second, they can engage consultants like the COBOL Cowboys, or hire and train new programmers to support their COBOL systems. This option also carries a great deal of risk, because companies have to find engineers who have the skills and interest to support COBOL applications, and then hope those engineers can unravel the layers of modifications and system integrations that accrue over five decades of maintenance, usually with little documentation.

Third, they can completely stop modifying core systems that nobody understands, but are too critical to risk changing or replacing. The USDA faced that choice.

It's not a people problem

From our perspective, though, the issue is not a human-resources problem. The companies that rely on COBOL-based systems don't lack the right people; they lack the right knowledge.

If the new engineers assigned to work on COBOL-based applications could access the departing developers' system and domain knowledge, or better yet, all of the programming and domain knowledge embedded in the system by prior engineers, imagine how much easier it would be for them to comprehend these complex systems. It would be like having a personal mentor who is always available, even while the previous engineers are off enjoying retirement.

That's why this is a knowledge problem and not a people problem.

It's a huge opportunity for anyone who can reach all of that trapped knowledge and make it easily comprehensible.

Exploiting the knowledge left behind

Phase Change's aim is to use our assistive AI technology to unlock all of the trapped programming and domain knowledge, as well as the human intent behind it, inside software applications, no matter which programming languages were used to create them, and to make that knowledge easy to access through natural-language interaction.

Engineers and stakeholders will literally talk to their software applications to reveal the hidden encoded knowledge they require to comprehend the overwhelming scale and complexity resulting from decades of modifications and system mergers, and hundreds of contributing developers.

Unlocking the encoded knowledge that's trapped in COBOL systems will give these large institutions the knowledge they need to make informed decisions about their legacy systems.

learn more about our technology

Todd Erickson is a tech writer with Phase Change. You can reach him at [email protected].
Elizabeth Richards is Phase Change's director of business operations. You can reach her at [email protected].


Prevent software application knowledge from walking out the door

April 10, 2017

by Todd Erickson, Tech Writer

Brain drain is a serious problem facing organizations that use software applications to run their businesses. Learn how you can seal the drain and retain all of the knowledge trapped in your applications.

At the end of every workday, your software development teams walk out the door, and all of their knowledge leaves with them. Some of them don't come back, and that loss of information and expertise, or brain drain, is a growing business problem, especially with IT industry turnover rates hovering between 20% and 30% annually.

Consider how much knowledge your organization loses when key members of your development team retire or join other companies. Not only do you lose development expertise, but you also lose the knowledge your engineers have about how your software applications work, such as:

  • How the system is architected
  • The subject-matter expertise used to implement functionality
  • The business considerations that drove product and feature designs
  • How third-party and external systems are integrated

The challenge of developing and supporting older, large-scale applications is exacerbated when companies have to scramble to replace retiring software engineers with less-qualified replacements. Multiple reports suggest that 10,000 Baby Boomers walk out the corporate door for good every day in the U.S.

Many of these retirees are the software engineers who developed and still maintain the many systems that run on COBOL and other mainframe programming languages. The impact of losing thousands of mainframe engineers and their vast programming and business knowledge will be widespread. The estimated 240 billion lines of COBOL code running today power approximately 85 percent of all daily business transactions worldwide.

Most organizations don't have the processes in place to capture their employees' business and system intelligence before they leave for good.

It’s especially difficult for engineers. Today’s software tools don't allow them to easily convey their expertise to others – or enable developers, business managers, and executives to easily discover and utilize any previously shared knowledge.

What can you do?

You might be surprised to discover that your engineers’ domain and system knowledge already resides in one other place outside their minds – your software. While creating the code, development teams pour their organization, programming, and business intelligence into your applications.

Imagine what you could do if your organization's technical and business stakeholders had access to all of the knowledge and human intent embedded in your software applications. Imagine asking your software application how it works and having it answer you back.

How can you unlock all of that untapped knowledge?

Liberate encoded knowledge

Phase Change Software is creating AI-assistive technology that unlocks the encoded knowledge embedded in your software applications.

Our assistive AI understands your software and turns it into formal units of knowledge. In essence, software is transformed into data.

Our AI assistant will liberate your software's hidden knowledge and help it understand itself. Our natural language processing (NLP) techniques will enable your technical and business stakeholders to easily interact with applications.

You will soon be able to literally have a conversation with your software, and have it teach you its encoded programming, business, and domain knowledge.

learn more about our technology

Todd Erickson is a tech writer with Phase Change Software. You can reach him at [email protected].


Hosting microservices: cost-effective hardware options

March 29, 2017

by Rahul Pandita and Todd Erickson

When we moved from being primarily focused on innovation to also developing a demo platform, our developers began to work with a much wider range of frameworks and libraries. As that range grew, we ran into dev-setup issues with our monolithic architecture, including:

  • Installing and supporting multiple IDE environments within a single monolithic codebase. Our developers were installing and maintaining libraries and frameworks locally that they would never need for their current tasks.
  • Software versioning. It's a project manager's nightmare to keep everyone across different teams on the same software versions.

We began to consider moving to a microservices platform, which would allow us to isolate our developers' working environments and segregate libraries and software applications.

Industry literature and Rahul's personal experience at North Carolina State University pointed to a shift away from monolithic architectures toward microservices, which are more nimble, increase developer productivity, and would address our scaling and operational frustrations.

However, moving to a microservices architecture made us address the platform's own issues, namely, where we would host these services: on in-house servers or on third-party hosted platforms.

We first considered moving straight to cloud services through well-known providers such as Google Cloud, Amazon Web Services, and Microsoft Azure. Cloud computing rates have dropped dramatically, making hosted virtual computing attractive.

However, at the time, we were still exploring microservices as an option and were not fully committed. We also still had a lot of homework to do before transitioning to the cloud. When we added security and intellectual property (IP) concerns to the mix, we decided on an in-house solution for the time being.

This blog post is about our process of determining which servers we would use to host the microservices.

Here we go

To get up and running quickly, we repurposed four older, idle Apple Mac Pro towers that were originally purchased for departed summer interns. We reformatted the towers and installed Ubuntu Server 16.04 LTS to make a future transition to the cloud easier, because most cloud platforms support Ubuntu Linux out of the box.

The towers featured:

  • Intel Xeon 5150 2.66 GHz dual-core processors with 4 MB cache
  • 4 GB PC2-5300 667 MHz DIMM
  • Nvidia GeForce 7300 GT 256 MB graphics cards
  • 256 GB Serial ATA 7200 RPM hard drives

These towers were fairly old – the Xeon 5150 processors were released in June 2006. We started with them to prove out the approach and quickly determine the benefits without investing a lot of money up front.

Moving to a microservices model immediately solved many of our issues. First and foremost, it allowed us to separate our development environments into individual services.

For example, our AI engine for logic queries could work independently of our program-analysis engine and our text-mining work. This was incredibly helpful because developers working on program analysis who did not directly deal with the AI engine no longer had to install and maintain AI-specific libraries, and vice versa for AI developers and the program-analysis tools.

Now, each team simply interacts with an endpoint, which immediately improved our productivity. More on this revelation in a future post.
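
To make "interacting with an endpoint" concrete, here's a minimal sketch of what a call from a developer's machine to one of these services might look like. The service host, route, and JSON payload below are hypothetical stand-ins rather than our actual API; the point is simply that a team consumes another team's capability over HTTP instead of installing its libraries locally.

```python
# Minimal sketch: calling a (hypothetical) program-analysis microservice over HTTP
# instead of installing its libraries locally. Endpoint and payload are illustrative.
import json
import urllib.request

ANALYSIS_ENDPOINT = "http://analysis.internal:8080/v1/analyze"  # hypothetical host and route

def request_analysis(source_path: str) -> dict:
    """Send a source file to the analysis service and return its JSON response."""
    with open(source_path, encoding="utf-8") as f:
        payload = json.dumps({"source": f.read()}).encode("utf-8")

    req = urllib.request.Request(
        ANALYSIS_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(request_analysis("example_module.py"))
```

The AI-engine and text-mining services expose endpoints the same way, so each team only needs its own stack plus an HTTP client.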

As we continued to implement the microservices platform, we were pretty happy with the results. Then our servers started showing signs of their technological age: performance lags, reliability issues, limited upgradeability, and increasing power consumption. The limited amount of RAM, the lack of cost-effective upgrade options, and constant OS crashes hampered our efforts.

Next step

For the next "phase" of our microservices evolution, we decided to acquire performant hardware specifically geared for hosting microservices.

Phase Change is a small startup with limited funding, so we had to purchase equipment that would meet our needs within a budget. Like many ‘cool’ startups, we are a Mac shop, so we naturally gravitated towards using Mac mini servers. We were already using Mac minis for file hosting, and there are plenty of websites detailing how to use them.

After conducting extensive online research (okay, mostly random Google searches), we decided our best option was not the Mac mini with OS X Server but the standard Mac mini model. The Mac mini with OS X Server features an Intel Core i7 processor and dual 1 TB Serial ATA drives, but Apple stopped offering it in October 2014.

So we considered the next best thing: mid-level standard Mac minis that included:

  • Intel i5 3230M 2.6-3.2 GHz processors with 3 MB cache
  • Intel Iris Pro 5100 HD graphics cards
  • 8 GB 1600 MHz LPDDR3 memory
  • 1 TB 5400 RPM hard drives
  • 1000 Base-T Gigabit Ethernet support

The Mac mini's form factor (7.7 inches wide by 1.4 inches high by 7.7 inches deep) and power consumption (85 W maximum continuous power) were also appealing. The retail base price is $699. The cost-effective modern processors and increased memory were the most important factors in our consideration, and the tiny little Macs would integrate well into our 'cool' Mac company environment.

We were all set to move on the Mac minis until we found Russell Ivanovic's blog post, "My next Mac mini," which pointed out that the Mac mini product line hadn't been updated since October 2014, over 2.4 years, yet Apple was still selling it at new-computer pricing. So much for the minis. Aargh!

Luckily, we didn't have to start at square one this time around, because Ivanovic's blog post revealed what he bought instead of the mini – an Intel NUC Kit mini PC.

We crunched the numbers (with a little help from Siri) and found that the NUC was a reasonable Mac mini replacement. The Intel NUC Kits are mini PCs engineered for video gaming and intensive workloads. The base models include a processor, graphics, peripheral connectivity ports, and expansion capabilities, along with slots for system memory and storage devices. We configured our NUC6i7KYKs with:

  • Intel Core i7 6770HQ 2.6-3.5 GHz quad-core processors with 6 MB cache
  • Intel Iris Pro 580 graphics cards
  • Crucial 16 GB (8 GB x 2) DDR4 SODIMM 1066 MHz RAM
  • Samsung 850 EVO 250 GB SATA III internal SSDs
  • 1000 Base-T Gigabit Ethernet support

The following comparison covers the old Mac Pro towers, the Mac mini, and the Intel NUC Kit NUC6i7KYK.

Base price
  • Mac Pro Tower: $200-$300 (discontinued, but preowned hardware is still available)
  • Mac mini: $699 (we chose the mid-level model for comparison fairness)
  • Intel NUC Kit NUC6i7KYK: $569

Processor
  • Mac Pro Tower: Intel Xeon 5150, 2.66 GHz dual-core
  • Mac mini: Intel Core i5 3230M, 2.6-3.2 GHz dual-core (upgradeable to an i7 for $300)
  • Intel NUC Kit: Intel Core i7 6770HQ, 2.6-3.5 GHz quad-core
  • Processor comparisons: Xeon 5150 v. i5 3230M; Xeon 5150 v. i7 6770HQ; i5 3230M v. i7 6770HQ

Graphics card
  • Mac Pro Tower: Nvidia GeForce 7300 GT
  • Mac mini: Intel Iris Pro 5100 HD (Apple hasn't officially released the Mac mini's exact graphics chipset, so we used specs from EveryMac.com)
  • Intel NUC Kit: Intel Iris Pro 580
  • Graphics card comparisons: GeForce 7300 GT v. Iris Pro 5100 HD; GeForce 7300 GT v. Iris Pro 580; Iris Pro 5100 HD v. Iris Pro 580

RAM
  • Mac Pro Tower: 4 GB PC2-5300, 667 MHz
  • Mac mini: 8 GB LPDDR3, 1600 MHz (upgradeable to 16 GB for $200)
  • Intel NUC Kit: Crucial 16 GB (2 x 8 GB) DDR4 SODIMM, 1066 MHz (NUC Kits ship without RAM; we installed 16 GB in each for $108)

Storage
  • Mac Pro Tower: 256 GB Serial ATA, 7200 RPM
  • Mac mini: 1 TB Serial ATA, 5400 RPM (upgradeable to a 1 TB Fusion Drive, 1 TB Serial ATA 5400 RPM plus 24 GB SSD, for $200)
  • Intel NUC Kit: Samsung 850 EVO 1 TB SATA III internal SSD (NUC Kits ship without internal storage; we installed 250 GB SSDs at $109 each for a good performance/capacity mix, but list a 1 TB SSD here for comparison fairness)

Comparison purchase price (per unit, as configured above)
  • Mac Pro Tower: $200-$300
  • Mac mini: $1,399 (base $699, plus an Intel Core i7 processor for $300, 16 GB of memory for $200, and a 1 TB Fusion Drive for $200)
  • Intel NUC Kit: $997 (base $569, plus 16 GB of DDR4 memory for $108 and a 1 TB SSD for $320)

Download the full comparison table in PDF

Our NUC Kits' total price ended up being $786 per unit with the 16 GB of DDR4 SODIMM RAM and the 250 GB SSDs. If we had opted for 1 TB SSDs to match the standard capacity of the mid-level Mac mini, the price would have jumped to $997 per unit.
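
For transparency, here's the back-of-the-envelope arithmetic behind those per-unit figures, using only the component prices cited in this post (shipping and taxes ignored):

```python
# Per-unit cost check using the prices cited in this post.
nuc_kit_base = 569   # NUC6i7KYK barebones kit
ram_16gb = 108       # Crucial 16 GB SODIMM kit
ssd_250gb = 109      # Samsung 850 EVO 250 GB
ssd_1tb = 320        # 1 TB SSD used in the comparison

as_built = nuc_kit_base + ram_16gb + ssd_250gb          # 786
comparison_config = nuc_kit_base + ram_16gb + ssd_1tb   # 997

print(f"As built:        ${as_built}")
print(f"1 TB SSD config: ${comparison_config}")
```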

We chose the Intel NUC Kits over the Mac minis because of the NUC Kits' newer technology and better overall performance for the price. Assembling the NUCs and installing Ubuntu Server 16.04 LTS on them was very straightforward.

Both units are fully configured and have been in full production operation for a few weeks. We haven’t encountered any issues. I'll divulge more on how they perform over time with different microservices and workloads in future blog posts.

P.S. We still looooove Mac towers and we are currently using them as test beds. That will also be the subject of a future blog post.

Rahul Pandita is a senior research scientist at Phase Change. He earned his Ph.D. in computer science from North Carolina State University. You can reach him at [email protected].

Todd Erickson is a tech writer at Phase Change. His experience includes content marketing, technology journalism, and law. You can reach him at [email protected].


An Analogy: Software AI and Natural Language

March 6, 2017

Today's AI technology is amazing.

A few short years ago, only humans could interpret the meaning of text and speech. Now our cell phones understand our voices and language well enough to distinguish accents, metaphors, and sarcasm.

IBM's Watson supercomputer even understood Jeopardy!® clues well enough to beat some of the show's best players.

Computers achieve natural-language understanding through a series of logically consistent normalization steps: processing basic sounds, recognizing words, and then understanding sentences.

If computers can understand natural language using logically consistent processes, shouldn't we be able to use similar processes to break down and normalize software?

In fact, shouldn't software be easier to normalize than the messy ambiguity of human communication?

The answer is yes.

Phase Change normalizes software source code into formal data types and organizes them into hierarchical structures that are probabilistically linked (horizontally and vertically). Our technology unlocks the vast domain and system knowledge embedded in software and makes it available to anyone involved in creating and supporting software.
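
As a rough illustration of that general idea (and emphatically not our actual technology), the short sketch below uses Python's standard ast module to turn a snippet of source code into a hierarchical structure that a program can query, extracting each function and the names it references. Our normalization goes far beyond a syntax tree, but the basic move is the same: treating software as structured data rather than opaque text.

```python
# Illustration only: turning source code into a queryable, hierarchical structure
# with Python's standard ast module. This is not Phase Change's technology.
import ast

SOURCE = """
def apply_interest(balance, rate):
    return balance * (1 + rate)

def monthly_fee(balance):
    return 5 if balance < 1000 else 0
"""

tree = ast.parse(SOURCE)

# Walk the hierarchy and pull out simple "units of knowledge":
# every function definition and the names it refers to.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        args = ", ".join(a.arg for a in node.args.args)
        names = sorted({n.id for n in ast.walk(node) if isinstance(n, ast.Name)})
        print(f"function {node.name}({args}) references: {names}")
```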

To learn more about how Phase Change's revolutionary technology transforms chaotic code into coherent data and intractable software into artificially intelligent agents, read Steve Bucuvalas' paper: "An Analogy: Software AI and Natural Language."
