It is a well-known industry fact that production optimization has the potential to increase the production throughput on the average asset by 2-5%, simply by operating the wells better within the existing production constraints and without any additional investments in infrastructure. Then why are the success stories lacking? Why is production optimization still elusive and pretty much never implemented in a systematic way in day-to-day operations?
The traditional understanding of what production optimization should be might be to blame: the ambition to deterministically formulate the entire mathematical description of the problem, with objective functions and all relevant constraints. Our experience is that the operational realities of running an asset are so complex, and so constantly changing, that a perfect encoding of the problem carries too much uncertainty and is too expensive to maintain at adequate quality in real time.
Typically, an optimization algorithm tends to select a solution that exploits the most extreme regions of a model, which is natural when trying to maximize the best and minimize the worst. In our experience, the best and worst are often due to modeling errors or missing constraints, which emphasizes the need for responsive interaction with the software so the user can reach a feasible and practical solution. Because of these complexities, it is of great importance that this interface be well designed. The key is to connect the optimization dashboard to the underlying models, and the models to the underlying data, in a seamless manner. That way, the engineer gets a truly interactive workflow: changing inputs, reviewing the resulting optimal solution, digging back into models and data to gain situational awareness or troubleshoot, and generally arriving at potential actions to take to increase production.
We believe production optimization needs a completely new approach to be successful in a systematic, sustainable way. Our approach is enabled by responsive user experience, continuous data-mining, machine learning models and a fresh perspective on what it means to optimize in real-time.
A simpler example of a value-adding optimization workflow is to continuously prioritize maximum production from the wells with the best output, given existing operational constraints. Let's assume that on a fictional asset the main limiting constraint, as is typically the case, is the gas handling capacity of the production separator. In such a scenario, fancy optimization algorithms and complex formulations of the entire production optimization problem might not be necessary to achieve tangible improvements in production.
If the production engineer has access to quality-assured, real-time estimates of the flow from each well, this problem simplifies to maintaining a real-time, prioritized list of wells according to their current gas-to-oil ratio (GOR). The lower the GOR, the higher the well appears on the list. The production engineer can review this list daily, or however frequently operational changes are considered. All else being equal, the engineer just needs to "spend" the limited gas handling capacity on the wells with the currently best, i.e. lowest, GOR. No fancy optimization algorithm to maintain, just a pure trade-off decision.
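As an illustration, this prioritization logic can be sketched in a few lines of Python. This is a minimal sketch, not our actual implementation; the well names, rates, GOR values and the separator capacity below are made-up example numbers:

```python
def prioritize_by_gor(wells):
    """Sort wells by current gas-to-oil ratio, lowest (best) first."""
    return sorted(wells, key=lambda w: w["gor"])

def allocate_gas_capacity(wells, gas_capacity):
    """Greedily 'spend' the limited gas handling capacity on low-GOR wells."""
    plan = []
    remaining = gas_capacity
    for w in prioritize_by_gor(wells):
        # Gas this well would produce at its current full oil rate
        # (oil_rate in bbl/d, gor in scf/bbl, so gas_rate in scf/d).
        gas_rate = w["oil_rate"] * w["gor"]
        if gas_rate <= remaining:
            plan.append((w["name"], w["oil_rate"]))
            remaining -= gas_rate
    return plan

# Made-up example asset: three wells, one separator gas capacity limit.
wells = [
    {"name": "A-1", "oil_rate": 1200.0, "gor": 90.0},
    {"name": "A-2", "oil_rate": 800.0,  "gor": 250.0},
    {"name": "A-3", "oil_rate": 1500.0, "gor": 140.0},
]
plan = allocate_gas_capacity(wells, gas_capacity=350_000.0)
# The low-GOR wells A-1 and A-3 fit within capacity; high-GOR A-2 does not.
```

The greedy pass over the GOR-sorted list is exactly the "pure trade-off decision" described above: no solver to maintain, just a sort and a running budget.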
This optimization workflow requires live well flow rate estimates for at least a subset of the wells, if not all of them. Data-driven virtual flow metering (VFM) is making big strides, demonstrating impressive results while being cost-effective. Learn more about NeuralCompass, our data-driven VFM offering, here.
Such a live prioritized list can also prove valuable in many other decision-making settings, for instance during planned or unplanned production-reducing activities, such as taking down a separator for maintenance. The updated list helps engineers quickly and soundly decide which wells to take down or reduce in order to stay as close to optimal as possible during periods of reduced production capacity.
ProductionCompass AI runs on Squashy, our data mining framework. One crucial type of dataset that can help automate and simplify production optimization is what we call a "pseudo-steady-state" dataset: a collection of periods in time where production at the asset has been approximately steady-state, with all asset controls fixed and untouched for the duration of those periods. Since this dataset practically eliminates all periods of transient behavior, we are left with approximately stable periods that can be compared to each other in terms of production performance. And production optimization is all about increasing the average throughput in exactly these periods of "normal" and "stable" production. We call these periods stable operating points.
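A toy sketch of how such pseudo-steady-state mining might work on a sampled series. This is an illustrative assumption-laden example, not Squashy's actual algorithm; the stability threshold, minimum segment length, and the sample data are invented:

```python
import statistics

def stable_operating_points(controls, rates, min_len=3, max_rel_std=0.02):
    """Split the series wherever a control setting changes, then keep only
    segments where the production rate is approximately steady
    (relative spread below max_rel_std)."""
    points, start = [], 0
    for i in range(1, len(controls) + 1):
        # A segment ends at the series end or when the control value changes.
        if i == len(controls) or controls[i] != controls[start]:
            segment = rates[start:i]
            if len(segment) >= min_len:
                mean = statistics.mean(segment)
                if mean > 0 and statistics.pstdev(segment) / mean <= max_rel_std:
                    points.append({"start": start, "end": i, "avg_rate": mean})
            start = i
    return points

# Made-up example: one control change, yielding two stable operating points.
controls = [1, 1, 1, 1, 2, 2, 2, 2]
rates = [100.0, 101.0, 99.0, 100.0, 120.0, 121.0, 119.0, 120.0]
points = stable_operating_points(controls, rates)
```

Each returned segment is a candidate "stable operating point" whose average throughput can be compared against the others.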
Now imagine having this data mining algorithm run live on your asset and being able to visualize these periods and their averages in real-time. You will easily be able to see the operating points in your near (or far) history, and compare them visually to see which ones had the best performance. You can even interact with these periods and quickly see how they are different from your current operating point in terms of changes made to your controls.
Two such pseudo-steady-state operating points are selected and highlighted in the screenshot below. Between these two operating points, changes were made to one or more control settings at the asset. The user can easily see that the average gas production has increased and the average oil production has decreased as a result of the control changes between the periods. This augmented information is visualized on top of the underlying time series, seen in the background, making it easier for the human eye to catch what is actually going on.
In addition to giving the production engineer situational awareness about the operating points the asset has produced in, the engineer can use this information to quickly assess the effect of changes made to the control system.
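Comparing two stable operating points then reduces to simple arithmetic on their averages and control settings. A minimal sketch, where the rate fields and choke settings are invented example values:

```python
def compare_operating_points(before, after):
    """Return the change in average rates and the control settings that differ
    between two stable operating points."""
    rate_delta = {k: after["avg_rates"][k] - before["avg_rates"][k]
                  for k in before["avg_rates"]}
    control_changes = {k: (before["controls"][k], after["controls"][k])
                       for k in before["controls"]
                       if before["controls"][k] != after["controls"][k]}
    return rate_delta, control_changes

# Made-up example mirroring the scenario above: after a choke change,
# average gas production rose while average oil production fell.
before = {"avg_rates": {"oil": 5200.0, "gas": 310_000.0},
          "controls": {"choke_A1": 60, "choke_A2": 45}}
after  = {"avg_rates": {"oil": 5050.0, "gas": 325_000.0},
          "controls": {"choke_A1": 60, "choke_A2": 55}}
delta, changed = compare_operating_points(before, after)
```

Surfacing exactly this delta, and the control changes that caused it, is what gives the engineer fast, concrete feedback on an action.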
With our action monitoring application, this information is readily available within a few hours, instead of after multiple days of waiting for allocated daily rates before and after the change. The engineer thus gets almost instant feedback on whether the implemented change led to an increase or a decrease in production. In the case of a decrease, the engineer can even reverse the change to avoid further losses. This approach to daily production optimization takes some of the risk out of operating the asset more actively in order to chase the barrels more effectively.