Recreate current rolling file immediately to log on file deleting #128

Open

asset-2 opened this issue Feb 4, 2020 · 15 comments

@asset-2

asset-2 commented Feb 4, 2020

This is not a new issue; it partially overlaps with #36, #87 and #96.

The service uses an hour as its rolling interval. The current log file was deleted mid-interval. The logger waits until the chosen interval ends and only then creates a new file to log to; all events and messages during the remaining time are ignored. The behavior I expect is for the file to be recreated immediately and written to as soon as there is something to log.

Do you guys plan to implement this thing? What possible workarounds do you suggest?

@asset-2 asset-2 changed the title Recreate current rolling file to log on file deleting Recreate current rolling file immediately to log on file deleting Feb 4, 2020
@cocowalla
Contributor

Do you mean you are manually deleting the log file during the period that Serilog would expect to be able to write to it?

@asset-2
Author

asset-2 commented Feb 4, 2020

Yes, it's possible when you run your application in a Docker container, for example.

@cocowalla
Contributor

Hmm, I'm not seeing how running in a container deletes log files during writing - I'd appreciate it if you could describe a scenario in which this happens.

@asset-2
Author

asset-2 commented Feb 4, 2020

No, 'running in a container' does not itself delete log files during writing.
The application runs in a Docker container and logs to a file. It's possible to get access to that log file and delete it manually. I was expecting the file to be recreated immediately, as I wrote before. But no: it only creates a new one when the rolling interval is over. In this scenario we lose a bunch of messages.

@cocowalla
Contributor

It's possible to get access to that log file and delete it manually

OK, but why would you do that? 😄

I'm trying to figure out if this is a realistic, common thing to happen, or an edge case where you have to actively be trying to cause a problem for it to be an issue.

Also, as an aside, if you configure the sink using shared: false, it will hold an exclusive lock on the log file during writing, preventing you from deleting it.
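
For reference, configuring that might look roughly like this (a minimal sketch; the path and rolling interval are just placeholders):

```csharp
using Serilog;

// Minimal sketch: shared defaults to false, shown explicitly here.
// The path and rolling interval are placeholders.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Hour,
        shared: false) // exclusive lock while writing; on Windows the file can't be deleted
    .CreateLogger();

Log.Information("Hello");
Log.CloseAndFlush();
```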

@asset-2
Author

asset-2 commented Feb 4, 2020

Also, as an aside, if you configure the sink using shared: false, it will hold an exclusive lock on the log file during writing, preventing you from deleting it.

I will give it a try. But something tells me that this approach does not work.

OK, but why would you do that? 😄

Someone accidentally removes a log file in production... the data is lost, and at that point an investigation begins.
As you can see, it is a pretty realistic occurrence. From my perspective it seems evident that the logger should proceed without interruption; the client loses its upcoming data for the remaining rolling period whenever a human or a system deletes the log file, intentionally or not. Well, I really do not know whether that thing fits well with the strategy that you guys keep.

@cocowalla
Contributor

But something tells me that this approach does not work

I don't know about the intricacies of the Linux file system, but it absolutely works on Windows.

Well, I really do not know whether that thing fits well with the strategy that you guys keep

Just a little piece of unsolicited advice: being rude to people will not get you what you want from them, especially if they are maintaining OSS in their spare time, for free.

If I'm being honest, I'm not really seeing how this could be a realistic or common problem to be dealt with by a logging framework. That said, it would be nice if we didn't lose messages in the rare event that somebody explicitly deletes a file that wasn't locked.

The most obvious ways to do this (e.g. checking if the file exists before every write) are likely to hurt performance badly, so we need to be careful about that.

Do you have any suggestions for an approach?

@nblumhardt
Member

Hi all! @asset-2 thanks for raising this as a separate issue; I see it's related to #96 but not covered by it, so I guess we'll track these two things separately. Thanks for digging in @cocowalla!

Well, I really do not know whether that thing fits well with the strategy that you guys keep

Just a little piece of unsolicited advice: being rude to people will not get you what you want from them, especially if they are maintaining OSS in their spare time, for free.

I think you might be misunderstanding each other. I can see both interpretations of that statement, and I think "From my perspective" is intended to put the latter statement in context - does this fit with the goals of the project? (I'm not sure what the answer is, at this point :-))

#96 won't be an issue on Unix, as file "locks" are only advisory on that platform, unlike Windows where a lock is enforced. Because of that, this ticket is something of an inverse of that problem.

I think we could attempt to implement this, but because the file will still "exist" from the app's perspective on Unix (it is just unlinked from its name in the directory hierarchy until the process closes the file handle), we can't do it in response to an exception; instead, this would necessitate some kind of monitor process, AFAIK (need to grab my copy of Kerrisk's book to confirm!).
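
For illustration only, a monitor along those lines could look something like the sketch below - nothing like this exists in the sink today, and the directory, filename pattern and configuration are all placeholders:

```csharp
using System.IO;
using Serilog;

// Hypothetical monitor: watch the log directory and recycle the logger when the
// current file is unlinked. The writing process itself never sees an error,
// because it still holds a handle to the (now unlinked) file.
var watcher = new FileSystemWatcher("logs", "app-*.log");
watcher.Deleted += (_, e) =>
{
    Log.CloseAndFlush();                    // release the handle on the unlinked file
    Log.Logger = new LoggerConfiguration()  // reopen; the file is recreated on the next write
        .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Hour)
        .CreateLogger();
};
watcher.EnableRaisingEvents = true;
```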

I'm not sure it will be worth that amount of effort, but keeping an open mind if someone is interested in exploring it. To answer your original questions, @asset-2 👍

Do you guys plan to implement this thing?

Not at this point, sorry.

What possible workarounds do you suggest?

At this point it's probably an operational problem to solve (avoid accidentally deleting log files by using automated runbooks/scripts to manage production servers).

@asset-2
Author

asset-2 commented Feb 5, 2020

@cocowalla @nblumhardt Thank you guys for considering the questions.

@qiqo

qiqo commented Apr 27, 2020

I need this feature. My use case is that I move logs to another storage on a regular schedule for a bank application: we roll logs daily, but an audit happens every 2 hours, and we have to move the processed logs out without restarting the application. For perspective, log4net and NLog do this, but we chose Serilog because it just works - except for recreating deleted logs that it still holds a handle to. I'm wondering if adding a conditional to check whether the file exists would fix it.
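
Just to illustrate the idea (a rough sketch, not anything the sink supports today; the path is a placeholder and would really need to track whatever file the sink is currently writing), a periodic check rather than a per-write one would also sidestep the performance concern raised earlier:

```csharp
using System;
using System.IO;
using System.Threading;
using Serilog;

// Placeholder: the file the sink is currently writing (would need to be tracked).
var currentLogPath = "logs/app-20200427.log";

var timer = new Timer(_ =>
{
    if (!File.Exists(currentLogPath))
    {
        // Rebuild the logger so the sink reopens (and thereby recreates) the file.
        Log.CloseAndFlush();
        Log.Logger = new LoggerConfiguration()
            .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Day)
            .CreateLogger();
    }
}, null, dueTime: TimeSpan.Zero, period: TimeSpan.FromSeconds(30));
```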

@rspears74

Hi, just wanted to chime in and say that I'm getting around this issue by calling Serilog.Log.CloseAndFlush(), deleting the file with System.IO.File.Delete(file) (after a System.IO.File.Exists(file) check), then recreating if I need to by calling my Log.Logger = new LoggerConfiguration()... code again. We upload log files with user-submitted error reports, and then immediately delete the file. We usually do this as the application is closing, but recreating seems to work just fine in my testing.
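
In code, that workaround looks roughly like this (a sketch; the path and configuration are placeholders):

```csharp
using System.IO;
using Serilog;

var logPath = "logs/app.log"; // placeholder

Log.CloseAndFlush();              // release the sink's handle on the file

if (File.Exists(logPath))
    File.Delete(logPath);         // e.g. after uploading it with an error report

// Recreate the logger; the next write reopens (and recreates) the file.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(logPath)
    .CreateLogger();
```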

@esskar

esskar commented Oct 13, 2020

Hi, is there a way to implement this? I think it's a realistic problem. Think about a high-availability application that cannot be restarted during operating hours; now the log disk is full, and the infrastructure team clears all files to free up space.

@impr0t

impr0t commented Dec 10, 2020

@cocowalla

Do you have any suggestions for an approach?

Is there a possibility to adopt a buffering style approach where entries enter a queue and flush after some limit is reached? It would reduce the checks required for file existence by a significant factor and could possibly leverage the existing buffering options available. Another approach may lie in leveraging filesystem hooks to determine if the file was deleted, and temporarily cache new entries until another file can be created and written to.

@cocowalla
Contributor

Is there a possibility to adopt a buffering style approach where entries enter a queue and flush after some limit is reached?

@impr0t the Serilog File sink already supports buffered output. The default is not to buffer output, because you risk losing logs in the event of a crash. I think this is a pretty sane default.
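
For reference, that option on the File sink looks like this (path and rolling interval are placeholders; buffered defaults to false):

```csharp
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Hour,
        buffered: true) // fewer flushes, but unflushed events are lost if the process crashes
    .CreateLogger();
```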

Another approach may lie in leveraging filesystem hooks to determine if the file was deleted, and temporarily cache new entries until another file can be created and written to

Hmm, unfortunately I think any approach that risks losing logs is probably a non-starter.

@rolandwu777

rolandwu777 commented Jul 30, 2023

Hi, is there a way to implement this? I think it's a realistic problem. Think about a high-availability application that cannot be restarted during operating hours; now the log disk is full, and the infrastructure team clears all files to free up space.

I often go into the log folder and delete everything under it to save space.
On Windows, the active log file is locked and cannot be deleted. I am fine with that.
On Linux (Debian 11), the active log file can be deleted. However, I then have to restart the service/app or, depending on rollingInterval, wait a minute, an hour or a day until the new log file is created. This is inconvenient.


Edited:
Okay, I found a combination of commands that can delete the logs that are not opened by other apps on Linux (Debian 11), so I should be fine now.

Test run phase:
find . -maxdepth 1 -name "Apple*" ! -exec fuser {} \; -exec echo {} \;

Actual deletion phase:
find . -maxdepth 1 -name "Apple*" ! -exec fuser {} \; -exec rm {} \;
