Phase Change will bridge application knowledge silos

July 10, 2017

by Todd Erickson

Members of Phase Change's management team address how our technology will bring together an organization's siloed application knowledge to enable faster responses to market demands.

It's a paradox. Your most successful applications get larger and more complex with updates, upgrades, and new features until they become difficult to change and adapt. They turn into hard-to-manage legacy systems that demand ever more time and money to remain valuable.

One of the main reasons applications become difficult to maintain is that knowledge silos emerge – where various people in development and other departments understand small portions of the code, but no one person knows the entire code base.

Then, when you bring people together to develop new features that address market demands or opportunities, each contributor knows only his or her portion of the application code, holds his or her own mental model of it, and finds that knowledge difficult to share.

Learn how Phase Change's assistive AI agent will bridge knowledge silos by understanding the entire code base, presenting a complete and accurate model, and collaborating with engineers and stakeholders.

Todd Erickson is a tech writer with Phase Change. You can reach him at [email protected].

Understanding code is the key to software development

June 26, 2017

By Elizabeth Richards and Todd Erickson

Discover how software developers are like archaeologists, and why understanding source code – the key to software development – involves a lot more than digging.

Software engineers are often called developers. However, experienced programmers estimate that they spend the great majority of their time (roughly 78%) searching through and understanding existing software, and only the remainder (22%) actually modifying legacy software or developing new applications.

Programmers are forced to spend so much time finding and comprehending code because current tools and techniques are technologically primitive and too narrowly focused on the search process itself. They don't help developers understand how the code fits together within the systems it serves.

In fact, a recent blog post compared the source-code searching process to archaeology. It's a reasonable analogy, given that the tools developers use are only slightly more advanced than rudimentary shovels and brushes.

Why is software development – an activity that's driving incredible technological change – so far behind the curve in building tools that help programmers comprehend the code they work on?

Artifacts

Searching for ancient relics and specific lines of code are both complex processes.

An archaeologist doesn't use a bulldozer or dig in random locations. She researches excavation sites and uses advanced technology, such as satellite imagery to find optimum exploration locales and ground-penetrating radar for mapping. Then she carefully and methodically removes topsoil while analyzing and recording each artifact, down to the smallest pottery shard.

Every relic she unearths builds her knowledge of the site, and the people and culture she's investigating. For example, the archaeologist may assemble a handful of pottery shards into a serving dish, which she studies alongside other artifacts to better understand an ancient culture's family meal rituals. She can easily share the dish with other scholars and store it for future analysis.

Each discovered artifact may also modify how she approaches the rest of the dig.

Code

In software development, Professor Vaclav Rajlich asserts that searching and understanding code involve two phases: concept location and impact analysis. Concept location involves finding the lines of code to be modified and the relevant but scattered source code surrounding them.

Impact analysis examines how a proposed modification will impact the entire application, including performance, stability, intent, and secondary consequences in distant modules. Poor or incomplete impact analysis can lead to more bugs.
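As a toy illustration only (this is not a description of Phase Change's technology), the following Python sketch builds a crude caller map for a single file and asks which functions directly call a hypothetical compute_discount function. Real impact analysis must go much further, tracing performance, stability, intent, and consequences in distant modules.

```python
# Toy impact-analysis sketch: which functions in one file directly call a
# given function? The file name and target function are hypothetical.
import ast

def build_caller_map(path):
    """Map each function to the set of simple function names it calls."""
    with open(path) as src:
        tree = ast.parse(src.read())
    callers = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            callers[node.name] = {
                call.func.id
                for call in ast.walk(node)
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            }
    return callers

def impacted_by(caller_map, target):
    """Return the functions that call `target` directly."""
    return {name for name, calls in caller_map.items() if target in calls}

if __name__ == "__main__":
    # "billing.py" and "compute_discount" are hypothetical examples.
    caller_map = build_caller_map("billing.py")
    print(impacted_by(caller_map, "compute_discount"))
```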

In essence, a developer spends his time in active knowledge construction, building an integrated mental model so he understands how a system is constructed and its purpose and intended results. Only then can he be confident enough to make changes.

Error logs, debuggers, and grepping help a developer find specific lines of code – his shards of pottery. But he must still reconstruct that code into the mental models necessary for understanding the system. And those models remain locked away in his mind, making them difficult to share and retrieve over time.
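To make those pottery shards concrete, here is a minimal sketch of the kind of text-based concept location developers rely on today. The src directory and the keyword are hypothetical; the point is that a search like this surfaces matching lines but says nothing about how they work together.

```python
# Minimal sketch of text-based concept location: walk a source tree and
# report every line that mentions a keyword. This finds the shards, not
# the system. Paths and keyword are hypothetical.
import os

def find_concept(root_dir, keyword):
    """Yield (file, line_number, line) for every line containing keyword."""
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith((".java", ".py", ".c", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as src:
                for lineno, line in enumerate(src, start=1):
                    if keyword in line:
                        yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in find_concept("src", "discount"):
        print(f"{path}:{lineno}: {line}")
```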

The greatest shovel ever invented

But it doesn’t have to be that way. Phase Change’s technology will transform how developers search and understand source code and applications. A programmer will no longer have to manually and laboriously search and play connect-the-dots to build mental models.

We are developing assistive artificial intelligence (AI) that automatically analyzes source code and understands the human intent behind it, creating immersive application-visualizations that resemble a programmer’s mental models. These visualizations can be stored, retrieved, and shared similar to how an archaeologist saves and shares the artifacts she assembles.

A developer will collaborate with our AI agent using natural language to effortlessly locate source and move quickly beyond simply identifying concept locations to performing comprehensive impact analysis. He will be aware of every effect his modifications bring about, and work confidently with a rich understanding of the code and the application.

Because the goal isn't to search, it's to understand.


Elizabeth Richards is Phase Change's director of business operations. You can reach her at [email protected].
Todd Erickson is a tech writer with Phase Change. You can reach him at [email protected].

Why Phase Change will fundamentally change software development – video

April 24, 2017

Gary Brach, Ken Hei, and Brad Cleavenger discuss how Phase Change's assistive AI technology will fundamentally change how software is developed so organizations can quickly and confidently respond to changing market dynamics.

While transformative advances in automation, communications networking, and computer processing over the last 20 years have vastly improved business operations, the same cannot be said for software development.

The process of developing the applications that now run our daily lives hasn't significantly changed since the 1970s.

Sure, we've developed better tools and better ways of communicating with one another during the development process – such as agile development techniques – but the underlying software development activities are the same.

This lack of substantial improvement makes it difficult for organizations to quickly respond to changing market dynamics.

However, the future of software development is bright. Phase Change's technology will fundamentally transform how software is developed by introducing our assistive AI into the process, enabling organizations to respond quickly and confidently to market changes and opportunities.

Watch the video below to learn why Gary Brach, Ken Hei, and Brad Cleavenger believe Phase Change's technology will fundamentally change the software development process.

Hosting microservices: cost-effective hardware options – blog

March 29, 2017

by Rahul Pandita and Todd Erickson

When we moved from focusing primarily on innovation to also developing a demo platform, our developers began working with very different frameworks and libraries. As the number of libraries and frameworks grew, we ran into dev-setup issues with our monolithic architecture, including:

  • Installing and supporting multiple IDE environments within the single monolithic environment. Our developers were locally installing and maintaining libraries and frameworks they would never need for their current tasks.
  • Software versioning. Keeping everyone on different teams on the same software versions is a project manager's nightmare.

We began to consider moving to a microservices platform, which would allow us to isolate our developers' working environments and segregate libraries and software applications.

Industry literature and Rahul's personal experience at North Carolina State University pointed to a shift away from monolithic architecture to a microservices architecture because it's more nimble, increases developer productivity, and would address our scaling and operational frustrations.

However, moving to a microservices architecture raised its own questions, namely: where would we host these services, on in-house servers or on third-party hosted platforms?

We first considered moving straight to cloud services through well-known providers such as Google Cloud, Amazon Web Services, and Microsoft Azure. Cloud-computing rates have dropped dramatically, making hosted virtual computing attractive.

However, at the time, we were still exploring microservices as an option and were not fully committed. We also still had a lot of homework to do before transitioning to the cloud. When we added security and intellectual property (IP) concerns to the mix, we decided on an in-house solution for the time being.

This blog post is about our process of determining which servers we would use to host the microservices.

Here we go

To get up and running quickly, we repurposed four older, idle Apple Mac Pro towers originally purchased for summer interns who had since departed. We reformatted the towers and installed Ubuntu Server 16.04 LTS to ease a future transition to the cloud, because most cloud platforms support some version of Ubuntu Linux out of the box.

The towers featured:

  • Intel Xeon 5150 2.66 GHz dual-core processors with 4 MB cache
  • 4 GB PC2-5300 667 MHz DIMM
  • Nvidia GeForce 7300 GT 256 MB graphics cards
  • 256 GB Serial ATA 7200 RPM hard drives

These towers were fairly old – the Xeon 5150 processors were released in June 2006. We started with them to prove out the approach and quickly determine the benefits without investing a lot of money up front.

Moving to a microservices model immediately solved many of our issues. First and foremost, it allowed us to separate our development environments into individual services.

For example, our AI engine for logic queries could work independently of our program-analysis engine and our text-mining work. This was incredibly helpful because developers working on program analysis, who did not deal directly with the AI engine, didn't have to install and maintain AI-specific libraries, and vice versa for AI developers and the program-analysis tools.

Now, each team simply interacts with an endpoint, which immediately improved our productivity. More on this revelation in a future post.
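As a rough sketch of what interacting with an endpoint looks like, the example below posts a logic query to a hypothetical AI-engine service over HTTP instead of linking its libraries locally. The host name, port, route, and payload shape are invented for illustration; they are not our actual endpoints.

```python
# Hypothetical sketch of consuming another team's microservice over HTTP
# rather than installing its libraries locally. Host, port, route, and
# payload shape are illustrative only.
import json
import urllib.request

def ask_logic_engine(query):
    """POST a logic query to a (hypothetical) AI-engine endpoint."""
    payload = json.dumps({"query": query}).encode("utf-8")
    request = urllib.request.Request(
        "http://ai-engine.internal:8080/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(ask_logic_engine("which functions write to the ledger table?"))
```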

As we continued to implement the microservices platform, we were pretty happy with the results. Then our servers started showing signs of their technological age – performance lags, reliability issues, limited upgradeability, and increasing power consumption. The limited memory, the lack of cost-effective upgrade options, and constant OS crashes hampered our efforts.

Next step

For the next "phase" of our microservices evolution, we decided to acquire performant hardware specifically geared for hosting microservices.

Phase Change is a small startup with limited funding, so we had to purchase equipment that would meet our needs within a budget. Like many ‘cool’ startups, we are a Mac shop, so we naturally gravitated towards using Mac mini servers. We were already using Mac minis for file hosting, and there are plenty of websites detailing how to use them.

After extensive online research, we decided our best option was not the Mac mini with OS X Server but the standard Mac mini. The Mac mini with OS X Server featured an Intel Core i7 processor and dual 1 TB Serial ATA drives, but Apple stopped offering it in October 2014.

So we considered the next best thing: mid-level standard Mac minis that included:

  • Intel i5 3230M 2.6-3.2 GHz processors with 3 MB cache
  • Intel Iris Pro 5100 HD graphics cards
  • 8 GB 1600 MHz LPDDR3 memory
  • 1 TB 5400 RPM hard drives
  • 1000 Base-T Gigabit Ethernet support

The Mac mini's form factor (7.7 inches wide by 1.4 inches high by 7.7 inches deep) and power consumption (85 W maximum continuous power) were also appealing. The retail base price is $699. The cost-effective modern processors and increased memory were the most important factors in our decision, and the tiny little Macs would integrate well into our 'cool' Mac company environment.

We were all set to move on the Mac minis until we found Russell Ivanovic's blog post, "My next Mac mini," which pointed out that the Mac mini product line hadn't been updated since October 2014 (nearly two and a half years), yet Apple was still selling them at new-computer pricing. So much for the minis. Aargh!

Luckily, we didn't have to start at square one this time around, because Ivanovic's blog post revealed what he bought instead of the mini – an Intel NUC Kit mini PC.

We crunched the numbers and found that the NUC was a reasonable Mac mini replacement. The Intel NUC Kits are mini PCs engineered for video gaming and intensive workloads. The base models include processors, graphics cards, peripheral-connectivity ports, and expansion capabilities, along with slots for system memory and permanent storage devices, but we upgraded our NUC6i7KYKs to include:

  • Intel Core i7 6770HQ 2.6-3.5 GHz quad-core processors with 6 MB cache
  • Intel Iris Pro 580 graphics cards
  • Crucial 16 GB (8 GB x 2) DDR4 SODIMM 1066 MHz RAM
  • Samsung 850 EVO 250 GB SATA III internal SSDs
  • 1000 Base-T Gigabit Ethernet support

The following comparison presents the technical specifications of the old Mac towers, the Mac mini, and the Intel NUC Kit.

Base price
  • Mac Pro tower: $200-$300
  • Mac mini: $699
  • Intel NUC Kit NUC6i7KYK: $569
  • Notes: The Mac tower has been discontinued, but you can still buy preowned hardware. We chose the mid-level Mac mini ($699) for comparison fairness.

Processor
  • Mac Pro tower: Intel Xeon 5150, 2.66 GHz, dual-core
  • Mac mini: Intel Core i5 3230M, 2.6-3.2 GHz, dual-core
  • Intel NUC Kit NUC6i7KYK: Intel Core i7 6770HQ, 2.6-3.5 GHz, quad-core
  • Notes: Processor comparisons: Xeon 5150 v. i5 3230M; Xeon 5150 v. i7 6770HQ; i5 3230M v. i7 6770HQ. You can upgrade the Mac mini to an i7 processor for $300.

Graphics card
  • Mac Pro tower: Nvidia GeForce 7300 GT
  • Mac mini: Intel Iris Pro 5100 HD
  • Intel NUC Kit NUC6i7KYK: Intel Iris Pro 580
  • Notes: Graphics card comparisons: Nvidia GeForce 7300 GT v. Intel Iris Pro HD 5100; Nvidia GeForce 7300 GT v. Intel Iris Pro 580; Intel Iris Pro HD 5100 v. Intel Iris Pro 580. Apple hasn't officially released info on the Mac mini's exact graphics chipset, so we used specs from EveryMac.com for comparisons.

RAM
  • Mac Pro tower: 4 GB PC2-5300, 667 MHz
  • Mac mini: 8 GB LPDDR3, 1600 MHz
  • Intel NUC Kit NUC6i7KYK: Crucial 16 GB DDR4 SODIMM, 1066 MHz
  • Notes: Out-of-the-box NUC Kits do not include RAM. We installed 16 GB DDR4 SODIMMs in our NUC Kits for $108 each. The Mac mini is upgradeable to 16 GB for $200.

Storage
  • Mac Pro tower: 256 GB Serial ATA, 7200 RPM
  • Mac mini: 1 TB Serial ATA, 5400 RPM
  • Intel NUC Kit NUC6i7KYK: Samsung 850 EVO 1 TB SATA III internal SSD
  • Notes: Out-of-the-box NUC Kits do not include internal storage. We installed 250 GB SSDs ($109 each) for a good performance/capacity mix, but use a 1 TB SSD here for comparison fairness. You can upgrade the Mac mini to a 1 TB Fusion Drive (1 TB Serial ATA 5400 RPM + 24 GB SSD) for $200.

Comparison purchase price (per unit)
  • Mac Pro tower: $200-$300
  • Mac mini: $1,399
  • Intel NUC Kit NUC6i7KYK: $997
  • Notes: Mac mini upgrades: Intel Core i7 processor ($300); 16 GB LPDDR3 memory ($200); 1 TB Fusion Drive ($200). NUC Kit config upgrades: 16 GB DDR4 memory ($108); 1 TB SSD ($320).

Download the table in PDF

Our NUC Kits' total price ended up at $786 per unit with the 16 GB DDR4 SODIMM RAM and 250 GB SSDs. If we had opted for 1 TB SSDs to match the standard capacity of the mid-level Mac mini, the price would have jumped to $997 per unit.
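For reference, here is the per-unit arithmetic behind those figures, using the prices listed in the comparison above:

```python
# Per-unit price arithmetic using the figures from the comparison above.
NUC_BASE = 569     # NUC6i7KYK kit
RAM_16GB = 108     # 16 GB SODIMM kit
SSD_250GB = 109    # Samsung 850 EVO 250 GB
SSD_1TB = 320      # 1 TB SSD used for the Mac mini comparison

as_built = NUC_BASE + RAM_16GB + SSD_250GB   # 786
comparable = NUC_BASE + RAM_16GB + SSD_1TB   # 997

print(f"As built: ${as_built} per unit; with a 1 TB SSD: ${comparable} per unit")
```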

We chose the Intel NUC Kits over the Mac minis because of their more up-to-date technology and better overall performance for the price. Assembling the NUCs and installing Ubuntu Server 16.04 LTS on them was very straightforward.

Both NUC units are fully configured and have been in full production operation for a few weeks without any issues. We'll share more about how they perform over time with different microservices and workloads in future blog posts.

P.S. We still looooove Mac towers and we are currently using them as test beds. That will also be the subject of a future blog post.

Rahul Pandita is a senior research scientist at Phase Change. He earned his Ph.D. in computer science from North Carolina State University. You can reach him at [email protected].

Todd Erickson is a tech writer at Phase Change. His experience includes content marketing, technology journalism, and law. You can reach him at [email protected].
