Well testing with test separators

A use case demonstrating the value of our well-test application when executing well-tests on a test separator

Value proposition

  • Optimizing well-test duration to better utilize the test separator and increase testing capacity
  • Providing more confidence in well-test results by assessing uncertainty
  • Providing a structured source of truth for all historical well-tests

Situation

A common denominator for all assets is that they routinely perform well-tests. The duration of a test typically ranges from hours to days, and a test usually requires that a single well is routed to a test separator. This introduces a capacity constraint on the number of tests that can be run within a given time period, as each well must be routed to the separator in turn.

When executing these tests, many operators either run fixed-length tests (e.g. 8 hours) or run the test until certain production parameters appear stable, often based on visual inspection. In reality, the parameters are subject to uncertainty that is difficult to measure with current solutions, and when the production parameters vary in cycles, as is often the case for water, correct parameter calculation becomes even more challenging. This complicates assessing the stability of these parameters and makes it difficult to trust the results.

Finally, the results from well-tests are often stored ad hoc in different spreadsheets, with little or no easy access to the historical sensor data recorded during the test. This poses a risk, as there is no single source of truth or quick access to a well-test database.

Solution

Together with our partners, we have developed a well-test application as a module on our production optimization platform. Our algorithm automatically detects when a well is routed to the test separator and can start gathering relevant data.
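As an illustration of the routing detection idea, the sketch below looks for sustained periods where a boolean routing signal is active. This is a minimal example under the assumption that such a signal (e.g. derived from valve-status or flow-rate tags) is available as a timestamped pandas series; the function name and inputs are hypothetical and do not describe the actual implementation.

```python
import pandas as pd

def detect_test_periods(routing_signal: pd.Series, min_duration: str = "1h") -> list:
    """Return (start, end) timestamps for periods when a well is routed to the
    test separator.

    Assumes `routing_signal` is a boolean time series indexed by timestamp
    (True while the well's test-separator routing valve is open). The signal
    and its derivation are assumptions for this sketch.
    """
    s = routing_signal.astype(bool)
    # Assign a group id to each run of identical values.
    group_ids = s.ne(s.shift()).cumsum()
    periods = []
    for _, run in s.groupby(group_ids):
        if run.iloc[0]:  # keep only runs where the well is on test
            start, end = run.index[0], run.index[-1]
            if end - start >= pd.Timedelta(min_duration):  # ignore brief line-ups
                periods.append((start, end))
    return periods
```

A call such as detect_test_periods(df["well_A_on_test"], min_duration="4h") would then return the candidate test periods for further processing.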

The application displays, in close to real time, all the information the engineers need about the test. Starting time, current duration, configurable parameters relevant to the test, and measured uncertainty augmented on top of the raw data are conveniently presented in the application. When the uncertainties of predetermined measurements fall below a given threshold, the application lets the user know that the test can be ended.
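As a simplified illustration of such a stopping rule, the snippet below checks whether the relative uncertainty of the running mean of a measured parameter has dropped below a configurable threshold, using the standard error of the mean as the uncertainty estimate. The actual application may use a different uncertainty model; the names and the 2% default threshold are placeholders.

```python
import numpy as np

def can_end_test(samples: np.ndarray, rel_threshold: float = 0.02) -> bool:
    """Signal that the test can end once the relative uncertainty of the
    running mean is below a threshold.

    Uses the standard error of the mean as a simple uncertainty estimate;
    this is an illustrative assumption, not the application's actual model.
    """
    if len(samples) < 2:
        return False
    mean = samples.mean()
    if mean == 0:
        return False  # relative uncertainty undefined for a zero mean
    sem = samples.std(ddof=1) / np.sqrt(len(samples))
    return abs(sem / mean) < rel_threshold
```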

All historical tests are managed in a structured well-test database that lets users find the tests they are looking for. The application also detects all historical tests, using the same rules it applies when tracking live tests, and stores them in the same database.
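To give a feel for what a structured well-test record could contain, the sketch below defines a minimal data class with a few typical fields. The field names, units, and defaults are hypothetical and do not reflect the application's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WellTestRecord:
    """Illustrative well-test record; fields and units are assumptions."""
    well_id: str
    start: datetime
    end: datetime
    oil_rate: float                     # averaged oil rate over the test
    gas_rate: float                     # averaged gas rate over the test
    water_rate: float                   # averaged water rate over the test
    uncertainties: dict = field(default_factory=dict)  # relative uncertainty per rate
    source: str = "live"                # "live" test or detected from historical data
```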

Outcome

With our well-test application, our partners have been able to address several of the pain points associated with test-separator well-testing.

The uncertainty tracking, and the alert when a test's uncertainty falls below a given threshold, have allowed our users to significantly optimize their well-testing. Wells with relatively low uncertainty can have their test time cut, be tested less frequently, or both. This increases testing capacity and allows the engineers to focus on testing wells with higher uncertainty. Shorter test cycles overall also let the engineers test each well more often than before.

By handling cyclic production with algorithms designed to compute statistics on time series, the application also gives the production engineers more confidence in the test results and better control of what the well is actually producing.
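One simple way to handle cyclic behavior is to average over whole cycles only, with the cycle length estimated from the signal's autocorrelation. The sketch below illustrates that general idea under simplifying assumptions; it is not the algorithm used in the application.

```python
import numpy as np

def cycle_aware_mean(signal: np.ndarray) -> float:
    """Average a cyclic signal over an integer number of cycles.

    Estimates the dominant cycle length from the autocorrelation function and
    trims the signal to whole cycles before averaging. A simplified sketch of
    the idea, not the application's actual algorithm.
    """
    x = signal - signal.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Local maxima of the autocorrelation after lag 0 indicate the cycle length.
    peaks = np.where((acf[1:-1] > acf[:-2]) & (acf[1:-1] > acf[2:]))[0] + 1
    if peaks.size == 0:
        return float(signal.mean())  # no clear cycle detected; fall back to plain mean
    period = int(peaks[0])
    n_cycles = len(signal) // period
    return float(signal[: n_cycles * period].mean())
```

Averaging over whole cycles avoids the bias introduced when a test window happens to end partway through a production cycle, which is one reason cyclic wells such as those with slugging water are hard to evaluate by eye.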

Each test is stored in the application, leaving the engineers with an accessible, organized database of all historical test results. This opens the possibility for rapid analysis, in one place, of how each well has developed over time.

Well test data is often the most reliable data we have about our wells’ behavior. Therefore, it is also the most valuable data in all types of model building, especially in data-driven well flow rate estimation. The test optimization module is a crucial part of the “test-to-rate” workflow, automatically providing qualified well-test data to NeuralCompass for rate estimation purposes.