Software Architect / Microsoft MVP (AI) and Technical Author

C#, Elasticsearch

How To: Resolving ‘Flood Stage Disk Watermark Exceeded’ Error When Running Elasticsearch Locally

I’m new to working with Elasticsearch.

I had it running on my local machine as a Windows Service, but it started misbehaving when inserting data.

The instance had initially worked, but every attempt to insert data into it now failed.

Documenting this here for future reference.


I stopped the Windows Service instance and ran Elasticsearch from the command line using the batch file elasticsearch.bat from the bin folder.


After a while the console filled up with errors like the following:
Flood stage disk watermark [95%] exceeded on [aRvzvtaFS2uzfc9yGzWKSQ][JMC-002][{SYSTEM_PATH}\elasticsearch-8.10.2-windows-x86_64\elasticsearch-8.10.2\data] free: 15.7gb[3.4%], all indices on this node will be marked read-only
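Before changing any settings, it can help to confirm how full the data disk actually is from Elasticsearch's point of view, and what the watermark thresholds are currently set to. A quick check (assuming the default local endpoint on port 9200, with no security enabled) uses the `_cat/allocation` and cluster settings APIs:

```shell
# Show per-node disk usage as Elasticsearch sees it
# (columns include disk.used, disk.avail and disk.percent).
curl -s "http://localhost:9200/_cat/allocation?v"

# Show the disk-based allocation settings, including the defaults
# (the flood stage watermark defaults to 95%).
curl -s "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk"
```

If `disk.percent` is above 95, the flood stage watermark has been crossed and the read-only block in the error above is expected.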

The Fix: Updating the Cluster Settings

After some digging, I found that I had to run a couple of PUT requests against the Elasticsearch instance: one to disable the disk-based allocation threshold check, and one to clear the read-only block that had been applied to the indices.

You can see these here:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'


Running each of these returns the following JSON:

    { "acknowledged": true }



After updating both settings, data could be inserted into the instance.
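Worth noting: disabling the disk threshold check is only a temporary workaround, since the watermark exists to stop a node filling its disk completely. Once disk space has been freed up, the safety check can be restored by resetting the transient override to null, which makes the setting fall back to its default (enabled) value. A sketch, again assuming the default local endpoint:

```shell
# Reset the transient override; the disk threshold check
# reverts to its default, enabled behaviour.
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": null } }'
```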

That’s it!
