Instructions not clear #27

Open
vemonet opened this issue Oct 22, 2021 · 3 comments

Comments

@vemonet (Contributor) commented Oct 22, 2021

Hi, we are trying to reuse LSQ to process logs of SPARQL queries.

I noticed that most of your recent commits (https://github.com/AKSW/LSQ/commits/develop) are about improving the readme/docs, but the documentation provided is still surprisingly unclear and a bit out of date.

I needed to write straightforward, up-to-date instructions to help our collaborators use it: https://github.com/vemonet/lsq-anal-sparql

You might want to reuse it to update your instructions and document how to run the LSQ process for people who are not developing LSQ itself (people who usually just want to provide 1 log file and 1 SPARQL endpoint URL to an executable, and get results).
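
For illustration, this is the kind of one-shot invocation such users expect (a sketch only: the subcommand and flag names below are hypothetical placeholders, not LSQ's actual CLI):

```bash
# Hypothetical one-shot workflow (placeholder subcommand and flags):
# point the tool at one query log and one endpoint, get results back.
lsq process \
  --log queries.log \
  --endpoint https://example.org/sparql \
  --output results.ttl
```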

@Aklakan (Member) commented Oct 22, 2021

Thanks for your efforts in explaining the LSQ process so clearly. I suppose you are fine with it if I eventually copy (at least partially) your instructions back into the docs? ;)

@Aklakan (Member) commented Oct 22, 2021

The reason why there are separate benchmark create and prepare steps is to provide context for interpreting benchmark results, especially regarding the thresholds that were in place (timeouts / result set limits). The generated config files are RDF and can be put into the same dataset / triple store as the benchmark results.
From the same benchmark configuration (created via 'benchmark create') it is possible to prepare and execute multiple runs at different points in time.
While results obtained with different configurations may not be directly comparable, runs prepared from the same settings typically are.
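
As a sketch of that flow (the create/prepare subcommands are the ones named above; the concrete flags, values, and file names are illustrative assumptions, not the exact CLI options):

```bash
# 'benchmark create' emits one RDF configuration file recording the
# thresholds in effect (query timeout, result set limit, ...).
# Flag names and values here are illustrative placeholders.
lsq benchmark create \
  --endpoint https://example.org/sparql \
  --query-timeout 60 \
  --result-set-limit 10000 > config.ttl

# From the SAME configuration, prepare (and later execute) multiple
# runs at different points in time; their results are typically
# directly comparable with each other.
lsq benchmark prepare --config config.ttl > run-october.ttl
lsq benchmark prepare --config config.ttl > run-november.ttl
```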

@vemonet (Contributor, Author) commented Oct 26, 2021

Feel free to use as much as you want :)

Yep, for the benchmark it's theoretically better to split the process into multiple steps to make it more modular. And it's also a better solution when you're setting up a complete system that takes care of it.

But when you just want to run through the benchmark once... it's a bit frustrating!
