10 software development predictions for 2018
Developers should be burning with excitement about the opportunities ahead in 2018, with products and tools around technologies such as blockchain, chatbots, serverless functions, and machine learning becoming mature enough for real-world projects. At the same time, many developers will be worried about holding up under the pressure to deliver code and functionality faster without compromising security or performance. But there is good news on that front as well.
For developers, 2018 will be defined by this tension between seizing transformative new opportunities and coping with the pressure to do more, with higher quality. Below are 10 predictions for how those forces will play out in the year ahead.
1. B2B transactions leveraging blockchain go into production
Businesses have begun to understand the security, reliability, and efficiency to be gained from blockchain-enabled transactions. Developers will implement many blockchain use cases across financial services and manufacturing supply chains in the coming year. Blockchain is a technology that enables efficient, secure, immutable, trusted transactions among organizations that might not fully trust each other, eliminating intermediaries.
Consider a company ordering products from an offshore manufacturer. The products are shipped by one carrier, pass through customs, travel with another carrier, and finally reach the buyer. Today, the verification and reconciliation of each step mostly happens through emails and spreadsheets, with a lot of people and processes involved. Blockchain eliminates manual processes and reconciliation by irrevocably recording updates to the blockchain ledger when a minimum number of parties say, “Yes, this part of the transaction happened.”
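The mechanics can be pictured with a toy sketch in Python (purely illustrative, not real Hyperledger Fabric chaincode): an update to the shared ledger is committed, hash-chained to the previous record, only once the required number of parties have confirmed it.

```python
import hashlib
import json
import time

class SharedLedger:
    """Toy append-only ledger: an update is committed only once enough parties confirm it."""

    def __init__(self, required_confirmations=2):
        self.required_confirmations = required_confirmations
        self.entries = []   # committed, immutable records
        self.pending = {}   # update_id -> {"update": ..., "confirmed_by": set()}

    def propose(self, update_id, update):
        self.pending[update_id] = {"update": update, "confirmed_by": set()}

    def confirm(self, update_id, party):
        """A party says: yes, this part of the transaction happened."""
        item = self.pending[update_id]
        item["confirmed_by"].add(party)
        if len(item["confirmed_by"]) >= self.required_confirmations:
            self._commit(update_id, item)

    def _commit(self, update_id, item):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "update_id": update_id,
            "update": item["update"],
            "confirmed_by": sorted(item["confirmed_by"]),
            "timestamp": time.time(),
            "prev_hash": prev_hash,   # chaining makes tampering with history evident
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        del self.pending[update_id]

# Example: a shipment leaves the manufacturer, and both manufacturer and carrier confirm it.
ledger = SharedLedger(required_confirmations=2)
ledger.propose("po-1001-shipped", {"po": "1001", "status": "shipped"})
ledger.confirm("po-1001-shipped", "manufacturer")
ledger.confirm("po-1001-shipped", "carrier")   # second confirmation commits the record
```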
Blockchain cloud services will bring scalability, resiliency, security, and pre-built integrations with enterprise systems, making it much easier for developers to focus on the business use case rather than the underlying Hyperledger Fabric implementation.
2. Chatbots routinely have real conversations with customers and employees
People are getting tired of needing multiple mobile apps to do the same job, like three different airline apps with different ways to check in and get a boarding pass. A better way is to provide that same functionality via the most popular app on your phone: messaging. Messaging has three attractive elements that hold across the medium: it is instant, expressive, and conversational, with no training needed. Thanks to advances in artificial intelligence and natural language processing, people will use Facebook Messenger, Slack, WeChat, WhatsApp, or a voice assistant like Amazon Alexa or Google Home to ask questions and get answers from intelligent bots.
Developers, using new intelligent bot-building cloud services, can quickly craft bots that understand the customer’s intent, maintain conversational state, and respond intelligently, while making integration with back-end systems easy. Imagine taking a picture of a dress you saw in a movie and messaging the image to your favorite clothing store’s bot, which uses image recognition and AI to recommend similar dresses. Employees could also be huge beneficiaries of bots, for tasks such as asking how many vacation days they have left, filing a help desk ticket, or ordering a replacement laptop, where the system even knows which laptops the employee is eligible for and can provide status updates on the order. Given that it is much more forgiving to experiment with your own employee base, developers might first apply their bot-building chops to employee-facing bots.
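As a rough sketch of the shape of such a bot (the intent names, entities, and back-end lookup below are hypothetical; a real bot platform would supply the natural language processing that resolves the user’s intent), the developer’s job is largely routing intents to back-end calls and keeping conversational state between turns:

```python
# Minimal sketch of a bot message handler: route a resolved intent to a back-end call
# while keeping per-user conversational state. The intents and back-end lookup are
# hypothetical placeholders.
sessions = {}  # user_id -> slots collected so far in the conversation

def lookup_vacation_balance(user_id):
    # Placeholder for a real HR system integration.
    return 12

def handle_message(user_id, intent, entities):
    state = sessions.setdefault(user_id, {})
    if intent == "vacation.balance":
        days = lookup_vacation_balance(user_id)
        return f"You have {days} vacation days left."
    if intent == "helpdesk.ticket":
        state.update(entities)                      # remember slots across turns
        if "issue" not in state:
            return "What problem are you seeing?"   # ask a follow-up question
        return f"Filed a ticket for: {state.pop('issue')}"
    return "Sorry, I didn't understand that."

print(handle_message("alice", "vacation.balance", {}))
print(handle_message("bob", "helpdesk.ticket", {}))
print(handle_message("bob", "helpdesk.ticket", {"issue": "laptop won't boot"}))
```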
3. The button disappears: AI becomes the app interface
AI becomes the UI, meaning that the synchronous, request-response model of using apps and services gradually disappears. Smartphones are still “low IQ”: you have to pick them up, launch an application, ask for something to be done, and eventually get a response. In a new generation of intelligent apps, the app will initiate interactions via push notifications. Take this a step further, and an app, bot, or virtual personal assistant using artificial intelligence will know what to do, when, why, where, and how, and will just do it. Two examples:
- An expense approvals app watches your pattern of approving expense reports, starts to auto-approve 99 percent of them, and flags only the rare report that genuinely requires your attention.
- An analytics app understands the underlying data, the questions asked so far by the business user, and the questions other users in the company have asked of the same dataset, and each day surfaces a new insight the analyst might not have thought of. As organizations gather more data, AI can help us learn what questions to ask of it.
Developers need to figure out what data is really important to their business application, how to watch and learn from transactions, what business decisions would most benefit from this kind of proactive AI, and start experimenting. Embedded AI can predict what you need, deliver info and functionality via the right medium at the right time, including before you need it, and automate many tasks you do manually today.
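To make the expense-approval example above concrete, here is a deliberately simple sketch of learning from past decisions and surfacing only the exceptions; the features, data, and tolerance are illustrative, not a production model.

```python
# Toy sketch of "AI as the UI" for approvals: learn a simple boundary from a manager's
# past decisions and only surface the reports that fall outside it.
past_decisions = [
    # (amount, category, approved)
    (45.00, "meals", True),
    (120.00, "travel", True),
    (60.00, "meals", True),
    (2400.00, "hardware", False),
]

def learn_limits(decisions):
    """Record the largest amount previously approved in each category."""
    limits = {}
    for amount, category, approved in decisions:
        if approved:
            limits[category] = max(limits.get(category, 0.0), amount)
    return limits

def triage(report, limits):
    amount, category = report
    if category in limits and amount <= limits[category] * 1.1:  # small tolerance
        return "auto-approve"
    return "needs human review"

limits = learn_limits(past_decisions)
print(triage((50.00, "meals"), limits))       # auto-approve
print(triage((3100.00, "hardware"), limits))  # needs human review
```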
4. Machine learning takes on practical, domain-specific uses
Machine learning is moving from the realm of obscure data science into mainstream application development, both because of the ready availability of pre-built modules in popular platforms, and because it is so useful when dealing with analysis across large, historical datasets. With machine learning, the most valuable insight comes with context — what you’ve done before, what questions you’ve asked, what other people are doing, what’s normal versus anomalous activity.
But to be effective, machine learning must be tuned and trained in a domain-specific environment that includes both the datasets it will analyze and the questions it will answer. For example, machine learning applications designed to identify anomalous user behavior for a security analyst will be very different from machine learning applications designed to optimize factory robot operations, which may be very different from those designed to do dependency mapping of a microservices-based application.
Developers will need to become more knowledgeable about domain-specific use cases to understand what data to gather, what kinds of machine learning algorithms to apply, and what questions to ask. Developers will also need to evaluate whether domain-specific SaaS or packaged applications are a good fit for a particular project, since large quantities of training data are required.
Using machine learning, developers can build intelligent applications to generate recommendations, predict outcomes, or make automated decisions.
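As a small illustration of what one domain-specific use case might look like in code, the sketch below flags anomalous user activity for a security analyst. It assumes scikit-learn is available, and the features and data are invented for the example.

```python
# Illustrative anomaly detection over user activity. In practice the features, the
# training data, and the algorithm choice would all be tuned to the specific domain.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_day, distinct_ips, failed_login_ratio]
normal_activity = np.array([
    [5, 1, 0.0],
    [8, 2, 0.1],
    [6, 1, 0.05],
    [7, 2, 0.0],
    [9, 1, 0.1],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

new_activity = np.array([
    [7, 1, 0.05],    # looks like the baseline
    [80, 15, 0.9],   # bursty logins from many IPs, mostly failing
])
print(model.predict(new_activity))  # 1 = normal, -1 = anomalous
```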
5. DevOps moves toward NoOps
We all agree devops is critically important for helping developers build new applications and features fast, while maintaining high levels of quality and performance. The problem with devops is that developers need to spend 60 percent of their time on the ops side of the equation, cutting into the time devoted to development. Developers have to integrate various continuous integration and continuous delivery (CI/CD) tools, maintain those integrations, and constantly update the CI/CD toolchain as new technologies are released. Everyone does CI, but not many people do CD. In 2018, developers will insist on cloud services that help the pendulum swing back to the dev side. That will require more automation for real CI/CD.
Docker gives you packaging, portability, and the ability to do agile deployments. You need CD to be a part of this Docker lifecycle. For example, if you are using containers, as soon as you commit a code change to Git, the default artifact built should be a Docker image with the new version of the code. Further, the image should automatically get pushed into a Docker registry, and a container deployed from the image into a dev-test environment. After QA testing and deployment into production, the orchestration, security, and scaling of containers should be taken care of for you. Business leaders are putting pressure on developers to deliver new innovations faster; the devops model must free up more time for developers to make that possible.
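The commit-to-image-to-registry-to-dev-test flow described above can be pictured as a few automated steps. The sketch below is only a stand-in for what a CI/CD service would do on each commit; the registry URL, image name, and deploy step are hypothetical placeholders.

```python
# Bare-bones sketch of the pipeline steps a CI/CD service would run after a commit:
# build a Docker image tagged with the commit, push it to a registry, then deploy it
# to a dev-test environment. Names and URLs here are placeholders.
import subprocess

def build_and_push(commit_sha, registry="registry.example.com/myteam"):
    image = f"{registry}/myapp:{commit_sha[:7]}"
    subprocess.run(["docker", "build", "-t", image, "."], check=True)  # image for the new code
    subprocess.run(["docker", "push", image], check=True)              # publish to the registry
    return image

def deploy_to_dev_test(image):
    # Placeholder: a real pipeline would hand the image to its orchestrator
    # (for example, updating a Kubernetes deployment) rather than run it directly.
    subprocess.run(["docker", "run", "-d", "--name", "myapp-devtest", image], check=True)

if __name__ == "__main__":
    image = build_and_push("abcdef1234567890")
    deploy_to_dev_test(image)
```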
6. Open source as a service accelerates consumption of open source innovation
The open source model remains one of the best engines of innovation, but implementing and maintaining that innovation is often too complex. For example:
- You want a streaming data/event management platform, so you turn to Kafka. As you start leveraging Kafka at scale, you must set up additional Kafka nodes and load balance large Kafka clusters, update these clusters as new releases of Kafka come out, and then integrate this service with the rest of your environment.
- You want Kubernetes for container orchestration. Instead of taking care of upgrades, backups, restores, and patches for your Kubernetes cluster, the platform should do all of that for you. Kubernetes ships new releases every few months, so the platform should support rolling deployments and self-healing.
- You want Cassandra as your NoSQL database. You want the platform to manage backup (incremental, or full on a schedule), patching, clustering, scaling, and high availability for the Cassandra cluster.
Developers will increasingly look for cloud services to deliver all of that high-speed innovation from open source while taking care of operational and management aspects of these technologies.
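For context, the application-side view of the Kafka example above is small. The sketch below uses the kafka-python client (one of several options, assumed here) with placeholder broker and topic names. Everything around it, such as scaling, balancing, and upgrading the cluster, is what a managed service would take over.

```python
# Minimal producer sketch with the kafka-python client; broker address and topic
# name are placeholders. Operating the cluster itself is the hard part a cloud
# service would absorb.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", {"order_id": "1001", "status": "shipped"})
producer.flush()
```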
7. Serverless architectures go big in production
The appeal of serverless architectures is clear: when there is demand for your code to be executed based on a certain event, infrastructure is instantiated, your code is deployed and executed, and you are charged only for the time the code runs. Let’s say you want to build a travel booking function to book or cancel flights, hotels, and rental cars. Each of these actions can be built as a serverless function, written in different languages such as Java, Ruby, JavaScript, and Python. There is no application server running with your code on it; rather, the functions are instantiated and executed on infrastructure only when needed.
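As a sketch of what one such action might look like, the function below books a flight. The event shape and response format are hypothetical; each serverless platform defines its own handler signature, but the idea is the same: a small, stateless function invoked only when an event arrives.

```python
# Illustrative serverless-style handler for one booking action. Field names and the
# response format are made up for the example.
import json

def book_flight(event):
    booking = {
        "traveler": event["traveler"],
        "flight": event["flight"],
        "status": "confirmed",
    }
    # In a real function this would call the airline's reservation API.
    return {"statusCode": 200, "body": json.dumps(booking)}

# Local test invocation; in production the platform instantiates and calls this on demand.
print(book_flight({"traveler": "alice", "flight": "UA 123"}))
```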
For developers, stringing serverless functions together to execute complex transactions creates new challenges: describing how these functions should be chained together, debugging distributed transactions, and determining how, when one function in a chain fails, to create compensating transactions that undo inappropriate changes. Look for cloud services and open source tools, like the Fn project, to flourish by helping developers easily manage the programming, composition, debugging, and lifecycle management of serverless functions, and to deploy and test them on a laptop, an on-prem server, or any cloud. The key is going to be picking a serverless platform that provides maximum portability.
8. The only question about containers becomes “Why not?”
Containers will become the default for dev/test work and commonplace for production applications. Expect continued improvements in security, manageability, orchestration, monitoring, and debugging, driven by open source innovations and industry standards. Containers provide the building blocks for many of the trends driving modern development including microservices architectures, cloud-native apps, serverless functions, and devops.
Containers won’t make sense everywhere (for example, when you need a more prescriptive cloud platform, such as an integration PaaS or a mobile PaaS), but these higher-level cloud services will themselves run on containers, and will be the exceptions that prove the rule.
In addition, software licensing models for high-value, commercial, on-premises software will have to embrace the spread of container adoption. Pricing models for software will have to support “turn on” and “turn off” licensing as containers are instantiated, scaled up, and scaled down.
9. Software and systems become self-healing, self-tuning, and self-managing
Developers and production operations teams are drowning in data from logs, from web/app/database performance monitoring and user experience monitoring, and from configuration. In addition, these various types of data are siloed, so you must bring many people into a room to debug issues. Then there is the issue of knowledge transfer: Developers spend a lot of time telling production ops the ins and outs of their applications, what thresholds to set, what server topologies to monitor for a transaction, and so on.
By aggregating large amounts of this data into one repository (across logs, performance metrics, user experience, and configuration, for example), and applying lots of compute capacity, machine learning, and purpose-built algorithms, cloud-based systems management services will ease performance/log/configuration monitoring significantly. These cloud services will establish baselines for thresholds by watching transactions (sparing the ops team from having to manage thresholds), and understand the server topology associated with transactions automatically. Using anomaly detection against these baselines, systems management services will automatically be able to tell developers when things are moving away from normal behavior, and be able to show the root cause of problems for a specific transaction.
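A simplified way to picture “baselines instead of hand-set thresholds” is the sketch below, which learns the normal range of a metric from history and flags drift. Real management services use far richer models; this mean-and-standard-deviation stand-in is only meant to show the idea, and the data is invented.

```python
# Learn a baseline for a metric from recent history and flag values that drift
# away from it, instead of asking an ops team to hand-tune a threshold.
import statistics

history = [210, 205, 220, 215, 198, 225, 212, 208]   # e.g., response time in ms

def baseline(values):
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, sigmas=3):
    return abs(value - mean) > sigmas * stdev

mean, stdev = baseline(history)
for sample in [214, 260, 530]:
    print(sample, "anomalous" if is_anomalous(sample, mean, stdev) else "normal")
```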
Developers will need to think about how to leverage this automation when writing their applications, so that they can build self-managing applications on top of these intelligent management systems in the cloud.
10. Highly automated security and compliance efforts become a new ally of developers
While developers often think of security and compliance as “someone else’s job” or “bottlenecks to delivering code,” the advent of comprehensive security and compliance regimes based on machine learning and delivered as SaaS will help align these efforts with the fast pace of development. Specifically, highly automated cyber defense will be deployed both “upstream” to identify and remediate potential security risks in development and “downstream” to automatically adapt a company’s security profile to ongoing application and environment changes (identifying attacks, remediating vulnerabilities, and assessing continuous compliance) in production.
Such protections will be required in some cases, with continuous compliance assessment a hallmark of GDPR and similar mandates. Developers, security professionals, and end-users will all benefit from a more rigorous, automated approach to security throughout the devops lifecycle.
Siddhartha Agarwal is vice president, product management and strategy, for Oracle Cloud Platform.
—
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.