This repository has been archived by the owner on Feb 14, 2018. It is now read-only.

Nested Wikis? #409

Open
WardCunningham opened this issue Mar 16, 2014 · 21 comments

@WardCunningham (Owner)

What would be the purpose of nesting one wiki within the pages of another? I tried this yesterday by inserting the entire cave.fed.wiki.org content as a data item in another wiki. To my surprise this worked just fine. The Data plugin written many months ago even scrubbed through the page names.

[Screenshot: cave.fed.wiki.org embedded as a data item in another wiki's page]

The thumbnail says 1x171 to indicate the data is organized as 171 rows by 1 column, the pages of the embedded wiki.

What more can we do with this?

This question is important to me now as I consider refactoring the core javascript and especially its relationship with plugins and storage mediums. Here a plugin on one page has become the storage medium of a whole additional wiki.

What might work:

Imagine that, upon viewing the Cave Wiki as Data page, its embedded wiki becomes just one more read-only site in your neighborhood. I'm probably only 10 lines of javascript away from making this much a reality. Then what?
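For concreteness, here is a rough sketch of what those few lines might do, assuming the Data plugin hands over the embedded pages as an array of {slug, page} rows. Every name below is illustrative, not the actual client API.

```javascript
// Sketch only: treat a wiki embedded in a data item as a read-only
// neighborhood site. Names are hypothetical, not the real wiki-client API.

// rows = [{slug: 'welcome-visitors', page: {title: ..., story: [...], journal: [...]}}, ...]
function embeddedSite(rows) {
  var bySlug = {}
  rows.forEach(function (row) { bySlug[row.slug] = row.page })
  return {
    // A read-only "site" only needs to answer two questions:
    // which pages exist, and what is page X?
    sitemap: rows.map(function (row) {
      return {slug: row.slug, title: row.page.title}
    }),
    get: function (slug) { return bySlug[slug] }   // no put(): read-only
  }
}

// Hypothetical registration point in the client, roughly the "10 lines":
// neighborhood['cave.fed.wiki.org (embedded)'] = embeddedSite(item.data)
```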

Of course there are large object issues that we already face with images. If it helps, imagine wiki had an asset manager that allowed it to hold large photos and even movies. How is a wiki of wikis of wikis ... necessarily different?

I'm feeling a need for moving whole sites around from platform service to platform service. I do this now for a few sites using rsync. Does a wiki of wikis make this just another drag-and-drop refactoring?

@paul90 (Contributor) commented Mar 17, 2014

Interesting idea

Why limit it to being a read-only copy? The embedded wiki data is potentially only a view; it could be stored as pages in a sub-site (which would provide an interesting option for those not wanting to, or unable to, run a farm).

@mkelleyharris (Contributor)

Powerful idea. Possible tree & leaf relationships. At the very least it could be a hierarchical indexing strategy: one master wiki that holds other wikis. It could be a candidate for the default behavior within a farm of wikis.


@rynomad (Contributor) commented Mar 17, 2014

I've been pondering this very scenario in thinking about wiki revision control:

If we had semantic linking between pages and the story items they contain (including other pages), one could imagine having a group of 'collection' pages that contain categorized pages relevant to a subject area. As one or more of these referenced pages change, the updates could bubble up to the parent and propagate to other peers that maintain a version of that same parent page. That means you could effectively subscribe to updates for a page without needing to keep a copy of that page on hand.

I've got a very rough outline of a linked data model that might be relevant to this conversation.

http://rosewiki.org/view/wik-dvcs/view/wik-data-model/view/wik-json-schema

TL;DR: every story item and page is accessible via a URI, and a page's story consists of an array of story item or page URIs. URIs are versioned on update and are constructed as a combination of type, id, publisher id, and timestamp. A major implication of this model is that we would need to rework how links are rendered, as page slugs would no longer be the primary key for looking up a page.
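To make the URI construction concrete, here is a minimal sketch; the 'wik://' scheme and the field order are guesses for illustration, not what the linked outline actually specifies.

```javascript
// Illustrative only: a versioned URI built from type, id, publisher id and timestamp.
function itemURI(type, id, publisherId, timestamp) {
  return 'wik://' + publisherId + '/' + type + '/' + id + '/' + timestamp
}

// A page's story becomes an ordered list of such references rather than inline items.
var story = [
  itemURI('paragraph', '4a9b1c', 'rosewiki.org', 1395072000000),
  itemURI('page', '71ffe2', 'cave.fed.wiki.org', 1395075600000)
]

// Updating an item mints a new URI (new timestamp). Peers holding the parent
// page see the reference change and can fetch the new version lazily, which is
// the "subscribe without keeping a copy on hand" effect described above.
function bumpVersion(uri) {
  var parts = uri.split('/')
  parts[parts.length - 1] = String(Date.now())
  return parts.join('/')
}

console.log(bumpVersion(story[0]))
```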

@mkelleyharris (Contributor)

This general idea may support aggregating data from multiple wikis. For example, USA sustainability data could be aggregated from separate states' wikis, which lets people divide and conquer their projects.

Dashboards could also aggregate data and subscribe to updates from the leaf wikis.

Recursive nesting ...


@StevenBlack (Contributor)

Some ideas to broaden the scope of discussion:

  • In practice, I find the notion is often flipped: given all these potentially nestable and compound wikis, how do we present different wiki subsets to different users? (Access control.)
  • Alternately, how can we allow users to select their own wiki aggregations and maintain artefact coherence given the arbitrary namespace composition? (Subscription control.)

Orthogonal to the above is feature control. Users have mixed privileges that can vary in surprising ways beyond "can user X read and write to wiki Y". In practice, user X is a role that varies from wiki to wiki. Now add varying wiki-specific peer-review workflow rules, and you're into an interesting set of problems and opportunities.

In my admittedly limited but time-tested experimentation, you can do this cleanly with surprisingly terse logic, using binary operations on wiki, publish, subscribe, feature, and state (workflow) integers, provided your integer math can handle all the flags you foresee setting and sieving.
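A minimal sketch of what that flag math might look like; the flag names and layout here are invented for illustration. (In JavaScript the bitwise operators work on 32-bit integers, which is exactly the kind of limitation Steven mentions.)

```javascript
// One integer per (user, wiki) pair; each bit is a privilege flag.
var CAN_READ    = 1 << 0,
    CAN_WRITE   = 1 << 1,
    CAN_PUBLISH = 1 << 2,
    CAN_REVIEW  = 1 << 3

var roleOnWikiY = CAN_READ | CAN_WRITE | CAN_REVIEW

// "Sieving" is a mask test: every required bit must be set.
function allowed(role, needed) {
  return (role & needed) === needed
}

console.log(allowed(roleOnWikiY, CAN_WRITE))                // true
console.log(allowed(roleOnWikiY, CAN_PUBLISH | CAN_READ))   // false: publish bit missing
```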

It's a fascinating problem space.

@almereyda (Contributor)

In some discussions we've also come across the need to fork a set of pages, or even a whole wiki as above, where the former would represent only a subset of the latter.

The ReadWriteWeb / LinkedDataPlatform could hold a key here with its WebID implementation of federated identity; XWiki even has it already. But that's all 5-star data future.

I've just had a little workflow idea in mind that I'd like to share for discussion:


Assumptions:

  • Federated Wiki is a federated, multimodal data store.
    • This data can be explored by the wiki client.
    • Its implicit dimensions are:
      • the journal (top-down linear time, refactored; page scope)
      • the story (the URI read linearly from left to right; interaction scope)
      • the sitemap (a wiki's concrete coordinates within the flat name space)
      • the federation (connected wikis in front of (newer) or behind (older) the current one; the current implementation encodes this as coloured borders)

Now, here is an idea for how to represent forking a certain set of pages within a workflow:

  • Currently, when we drop a link on the factory, only the last page at the end of the URI gets resolved by the server.
  • Couldn't we hack it so that the story (i.e. the pages opened next to each other) is preserved, and clicking on such a link would open all child elements to the right, thus recreating the story? (See the sketch after this list.)
    • Those pages should already have been forked, if available. If the original source (i.e. another domain in the URI) is not available, the current federation is queried for possible 'second hand', 'best guess' solutions, which should be marked as such.
      • A new drop-story factory could be held responsible for that.
      • If applicable, the nested wiki approach seems interesting for that scenario, too.
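A sketch of that link-drop idea, assuming lineup URLs keep the pairwise /view/slug (local) or /remote-site/slug structure seen elsewhere in this thread; the real URL grammar may have more cases.

```javascript
// Recover the whole lineup from a dropped federated wiki URL instead of
// resolving only the last page.
function parseLineup(url) {
  var u = new URL(url)
  var segments = u.pathname.split('/').filter(Boolean)
  var lineup = []
  for (var i = 0; i + 1 < segments.length; i += 2) {
    lineup.push({
      site: segments[i] === 'view' ? u.host : segments[i],   // 'view' means the origin host
      slug: segments[i + 1]
    })
  }
  return lineup
}

// parseLineup('http://rosewiki.org/view/wik-dvcs/view/wik-data-model')
//   => [{site: 'rosewiki.org', slug: 'wik-dvcs'},
//       {site: 'rosewiki.org', slug: 'wik-data-model'}]
// A drop handler could then open (and fork, where possible) each entry in order.
```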

@almereyda (Contributor)

Now that Xanadu has arrived, I see another use for Nested Wikis.

Sometimes I want to group some paragraphs together, maybe for referencing, moving around together or other purposes. That interface is quite easy to imagine: Connected paragraphs would have a coloured bar on their left side, or another visual connection.

The Nested Wikis data plugin could then hold that data: which paragraphs belong together. I could then decide how I relinearize my story/journal/page with the client. In a next step, the Nested Model would also allow storing cached copies of full paragraphs, additional metadata, etc., once we move in the direction of Transclusion.

The Nested Wikis data plugin could therefore be a data store for plugins that need different content types, sourced from different factories within the Nested Wiki and made available to their main factory's summary view.

Inspiration here comes from Rizzoma, Gingko and Wagn, and loosely from Netention [Demo] and Fluid Views [scroll, switch top-left].


Edit from 15 Feb 15: Updated naming.

@StevenBlack (Contributor)

At a meta-level this has much to do with storage granularity, and the mutability of what's fetched.

Here's a great example that may be new and interesting to some.

Did you know... jQuery.load() explicitly supports filtering content. For example, this works great: only the #container segment of the Ajax response is loaded into the #result DOM element.

$('#result').load('ajax/test.html #container');

This is very handy!

In wiki terms, when we talk about "nesting" we should also include the ability to nest sub-elements because storage granularity should not unduly limit nesting functionality.

I apologize if this is obvious. I'm watching this project from afar, unfortunately. If this is old news, please disregard :)

Edit: I'm not suggesting that this should be handled on the client side; it should be supported server-side. But in a pinch the client can do it. Here I'm using jQuery as an example of the mutability of the fetch, which is the central idea.
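Translated to wiki storage, the same fragment idea might look like this; it assumes the page JSON shape with a story array of items carrying ids (the id below is made up).

```javascript
// Fetch a whole page's JSON (federated wiki serves pages at /<slug>.json)
// and keep only one story item, analogous to the #container filter above.
$.getJSON('http://cave.fed.wiki.org/welcome-visitors.json', function (page) {
  var wanted = page.story.filter(function (item) {
    return item.id === 'a4f1c2d3'   // select a single sub-element by id
  })
  // Ideally the server would accept the same filter and return only `wanted`,
  // but in a pinch the client can do the sieving.
  console.log(wanted)
})
```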

@almereyda (Contributor)

Note to self: Thinking a lot about plugins today, I will further investigate how the ideas from The Object Network, with their Dynamic UI models, and Mozilla Brick's Web Components inform this discussion. @bblfish also mentioned something about self-building UIs in Paris.


Update: TangleDown is another candidate for (easy?) inclusion as a factory.


There are some predefined X-Tags to get an impression.


Having said that, it would be nice to also have a federation of versioned logic, in this case the plugins. What else could we think of?

@WardCunningham (Owner, Author)

We did once support a query for a page that had a dataset by name. The idea was that a plugin could look to its left for data and, if it didn't find any, ask the server whether any page had the data. The Metabolism plugin used this feature.

https://github.com/fedwiki/wiki-gem/blob/master/lib/wiki/server.rb#L155-L167
https://github.com/fedwiki/wiki-plugin-metabolism/blob/master/client/metabolism.coffee#L20-L26

If I had already fetched the whole site, as we considered at the start of this issue, then finding and attaching the desired dataset would be a client-side extraction.
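That extraction could be little more than a scan over the embedded rows. A sketch, assuming the dataset is matched by the item's text caption (the actual server-side query may match differently):

```javascript
// rows = [{slug: ..., page: ...}, ...] as held by the Data plugin.
function findDataset(rows, wantedName) {
  for (var i = 0; i < rows.length; i++) {
    var story = rows[i].page.story || []
    for (var j = 0; j < story.length; j++) {
      if (story[j].type === 'data' && story[j].text === wantedName) {
        return {slug: rows[i].slug, data: story[j].data}   // first match wins
      }
    }
  }
  return null   // nothing embedded: fall back to asking the server, as Metabolism does
}
```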

I wonder what @StevenBlack thinks could be done client-side to parametrize queries and then execute them server-side. The Metabolic Calculator is a query after all.

http://fed.wiki.org/metabolic-equivalent-of-task.html

@StevenBlack (Contributor)

Forgive this utilitarian tangent...

@WardCunningham have you seen JSONSelect? It works both client- and server-side, via a .js library and/or a Node.js module.

I've used this extensively. I can't overstate how handy mining arbitrary JSON can be, regardless of the implementation layer. JSONSelect is trivially easy to use too; a masterpiece.

So the question becomes vastly interesting if we know the client has keen powers to refine data beyond what the server supports. Orthogonal to that, it's interesting when both client and server can share the same "selector" language.

So one way to parameterize a query is to pass "selectors" either as direct parameters, or as parameters to a server-side callback (or some server-side post-process) prior to returning the response.

Alternately, the client can fetch the data and process the response locally using the very same selector.
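A small example of the shared-selector idea, using JSONSelect; the page shape and selector below are illustrative.

```javascript
var JSONSelect = require('JSONSelect')   // in the browser, a <script> include instead

// The usual federated wiki page shape: title, story items, journal.
var page = {
  title: 'Welcome Visitors',
  story: [
    {type: 'paragraph', id: 'a1', text: 'Welcome to this Federated Wiki site.'},
    {type: 'data', id: 'b2', text: 'Cave Wiki as Data', data: []}
  ],
  journal: []
}

// ".story .text" selects the text field of every story item. The same selector
// string could travel to the server as a query parameter, or be applied
// locally to an already-fetched response.
var texts = JSONSelect.match('.story .text', page)
console.log(texts)   // e.g. ['Welcome to this Federated Wiki site.', 'Cave Wiki as Data']
```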

@bblfish commented Jun 10, 2014

Since I was summoned, I'll just post a few thoughts on the notion of merging information.

I'd just start with the following point: merging pages (literals) and merging data have very different properties:

  • Merging of pages can be done only with diff tools and only if differences are very minimal
  • Merging data is much easier - that is what RDF is designed to do

So when we build applications in LDP we are merging data fetched from different parts of the web. What the user decides to merge or unmerge will depend on his point of view. That is, merging of data is subjective, and it expresses a point of view on the world: which possible world we think we are living in, or which possible world we would like to explore.

Still, one can imagine an LDP version of a wiki or of text, where pages publish metadata about which version they were copied from and which pages are copies of them. If one puts together a notification mechanism (see e.g. Friending on the Social Web), then one could make it easy to keep such distributed histories of pages. A user could then agree with another's diff and incorporate it, or even just point to the other page (which way you go is a question of trust). But here we are not that far from the web as a distributed publishing mechanism.

As pages become more semantic (data-oriented, if you want), it becomes easier to cross-reference and to verify the logical cohesion of a set of information mechanically, or at least to point up potential logical conflicts. That can of course go much further than textual conflicts, and is also another level of engineering altogether.

@WardCunningham (Owner, Author)

Thank you Steven, JSONSelect is just the sort of generalization that I find useful. The Metabolic Calculator is too application-specific; this wiki's general plugin architecture, too general. We spend a lot of time "wrangling" information. I think of wrangling as making structures match without doing too much damage.

http://vis.stanford.edu/wrangler/

I think Henry is describing a world where wrangling is automatic. I like the vision but I don't know how to do it. A reference to LDP would help. I found http://en.wikipedia.org/wiki/Label_Distribution_Protocol but think that is something else.

My own pet theory is that there is power in linearizing. The power comes from the organizational convenience of a less expressive structure. There are lots of examples here. A favorite of mine is "scrubbing" with the mouse over a sequential dataset. Scrubbing works when there is one primary dimension of aggregation and the aggregated elements have some similarity. I've linearized pages, both the story and the journal. And the lineup too (the sequence of pages on the screen, or stored somehow we haven't yet figured out).

Wiki is an editor. I made "move" the first and most highly developed edit operation. I imagine this as organizing blobs of json under sentient, motivated, creative control. My thesis, if I have one in this experiment, is that a federation of editors creates a medium, a landscape, within which selfish-blobs thrive.

@bblfish commented Jun 10, 2014

The main LDP spec is quite solid, though I think one really only needs to learn about the BasicContainer: https://dvcs.w3.org/hg/ldpwg/raw-file/tip/ldp.html
The LDP Primer is still a work in progress; if you again stick only to the BasicContainer, you have the key feature: https://dvcs.w3.org/hg/ldpwg/raw-file/bblfish/ldp-primer/ldp-primer.html
You can see it in action, together with WebAccessControl and WebID+TLS, with curl on rww-play: https://github.com/read-write-web/rww-play/wiki/Curl-Interactions

I'll soon put online a new rww-play server which will show how one can use this for distributed Social Networks. It's quite complementary to wikis: just an additional tool to help re-decentralise the web.

@almereyda (Contributor)

I would like to see Nested Wikis as a kind of branch. That might again involve thinking about a pseudo-three-dimensional interface metaphor.

dubbing

  • For me, the name space each wiki opens is part of an epistemological graph which is browsable and crying out for inference with others. Not that we still try to produce a neutral point of view; instead we can show differences over time for any given name.
  • Once we start creating content on wikis as comments on previous wikis, we want stable links, because we will rely on others' wiki content. Nesting several wikis, i.e. with help of the data plugin, can be one way to secure the availability of the side-strings of our conversation.
    It should be possible to keep a shadow copy of a whole domain. yala.fed.wiki.org could then be accessed as usual by inserting it next to the URL slug of a given page, but would be preferred locally if a shadow copy exists (i.e. a data paragraph that specifies the originating domain of its nested wiki). These are not forks in the regular sense! The client could still show updates on the remote domain.

returning


_Edit_ 15 02 15: Formulated sentences out of the former stub.

@almereyda (Contributor)

Rereading this conversation, some comments emerged.


@WardCunningham wrote
16 Mar 14: I'm feeling a need for moving whole sites around from platform service to platform service. I do this now for a few sites using rsync. Does a wiki of wikis make this just another drag-and-drop refactoring?

For two production wikifarms I'm using private git repositories for this task, including occasional string replacements in newly checked-out branches if the domains don't fit.

There may be other git-based storage engines worth elaborating.


@rynomad wrote
17 Mar 14: I've got a very rough outline of a linked data model that might be relevant to this conversation.

http://rosewiki.org/view/wik-dvcs/view/wik-data-model/view/wik-json-schema

This URI remains a perfect example of why we need more fine-grained forking of pages / stories / wikis. It only became available again after I persuaded @rynomad to release its data folder and republished it on http://wiki-allmende.rhcloud.com, because this is working material for me, which I reused, for example, in the last-but-one line of Support the DAT ecosystem.


@StevenBlack wrote
18 Mar 14: how can we allow users to select their own wiki aggregations and maintain artefact coherence given the arbitrary namespace composition

It is especially the term namespace composition that strikes me here: the fold of differently aligned namespaces becomes the geometric function we apply to the topological relations implied by a wiki's sitemap and linkmap.

@StevenBlack wrote
9 Jun 14: not suggesting that this should be handled on the client-side. This should be supported server-side. But in a pinch the client can do this.

@WardCunningham wrote

14 Jun 14: This project was founded on the vision of a proliferation of servers exchanging and caching pages on our behalf. I described this in this repo's ReadMe three years ago.

21f4e75#commitcomment-6670041

With this comment, left this morning, I admit that this much hasn't worked out. I have high hopes for IPv6 and even more for NDN and similar overlay networks. However, I can't see how these technologies become anything more than neighborhoods in a more comprehensive federation.

Maybe one day wiki-client will share components that can be reused by wiki-node-server, e.g. the push and pull logic, so that in fact we don't care where a client of a remote wiki API is located: inside a user's browser, or running as a sub-process of an arbitrary wikifarm.

@bblfish Do I read LDP right that WebID delegation could help authorize such requests here, no matter which client is acting on whose behalf?


@almereyda wrote (me! ;))
9 Jun 14: it would be nice to also have a federation of versioned logic, in this case the plugins

@WardCunningham wrote
14 Jun 14: We made the choice last summer to use npm as the preferred plugin registry

Two aspects of package-managing plugins come to my mind:

  • There is substack/hyperboot, which calls itself an offline webapp bootloader; what it does is sign versioned releases of a certain piece of software. A package repository built on top of an exposed REST API that gives access to such releases may help in becoming independent from npm. Ideally it would be built with wiki itself.
  • Where it comes into play, lazy loading of wiki-plugin-* packages via npm within the browser could come in handy, if no such parallel package repository and no wiki in the current search scope offers a certain module for cross-referencing or copying to the local client. Is that even possible?

@bblfish wrote
10 Jun 14: But here we are not that far from the web as a distributed publishing mechanism.

@WardCunningham wrote
10 Jun 14: Wiki is an editor. ... My thesis, if I have one in this experiment, is that a federation of editors creates a medium, a landscape, within which selfish-blobs thrive.

Here you write about the Thought Soup again, if I don't misunderstand.

As long as we agree that the web platform itself is the modular compound we are designing in common, our scope switches quickly from implementing open and standardized ecosystems to educating and leading by example through self-dogfooding of already existing federation interfaces.


@bblfish wrote
10 Jun 14: I'll soon put online a new rww-play server which will show how one can use this for distributed Social Networks.

FYI, @allmende has taken custody of this task to be released within the @ecobytes infrastructure.

@almereyda (Contributor)

Has anyone interested in Xanadu recently played with Bruno Latour's Inquiry into Modes of Existence?

@WardCunningham (Owner, Author)

Latour has been mentioned in a good light in our happenings, so I took some time to investigate. I examined these sources but did not join the inquiry.

http://modesofexistence.org/
http://modesofexistence.org/tuto-contribution/
https://github.com/medialab/aime-core
http://www.sciencetechnologystudies.org/system/files/v27n1BookReview1.pdf

The interactive application appears to be a hypertext organized into three domains, where one includes user-generated content. Links carry an attribute, the mode, of which there are provisionally fifteen. I would describe this as a "schema" imposed on the hypertext, presumably to the benefit of the project, the inquiry. Latour has been defensive of this schema, arguing that though it might be a "system" it is one that has surfaced through years of work, not one that is preconceived.

A new implementation is mentioned in blog posts. Aime-core has three or four contributors working this year in Angular, Node and Neo4j. This all seems fit for purpose but is neither distributed nor designed to outlive the project.

@almereyda (Contributor)

Unfortunately their blog indicates the premature end of the software development.

@WardCunningham (Owner, Author)

I poked around at the leftovers. It remains much as I remembered: promising but hard to find the meat.

@almereyda (Contributor)

@rynomad's former rosewiki.org has been rehosted again, since the recent shutdown of the OpenShift v2.0 free tier, at http://rosewiki.federated.wiki/view/ryan-bennett/view/recent-changes
