User:Dan - Information Rating System Wiki


How to deal with predicates whose truth changes over time

Votes on truth vs votes on decision-making

We will want some way to guide new users through expressing "why" they believe X to be true or false, so that their explanation is (a) easily understood and analyzable, and (b) ratable.
As part of this process, the user may want to link to evidence supporting their
vote.
As a side note, a thorough user may desire to add both evidence supporting and
rejecting X.

When adding evidence, it is important to construct one or more impact arguments that explain why the evidence is relevant to X. Users will vote for/against such impact arguments. Sub-arguments can also be created on an impact predicate to argue for/against the impact predicate's truth.

At first thought, there are several ways impact arguments can be structured:
1. Impact argument that evidence argues for the predicate.
2. Impact argument that evidence argues against the predicate.
3. Impact argument that evidence doesn't impact the predicate.
In the case of for/against, the evidence could have varying "amounts" of impact; how can that be quantified? Impacts with a quantitative value are a bit troublesome, because then we're not voting on a simple true/false predicate (or if we are, we're voting on a large number of similar predicates with different quantities, which doesn't seem practical).

Users can directly rate the believability of each piece of related evidence.
If the rating for the evidence is low, the importance of the impact arguments
clearly won't matter much.
Directly rating the believability of documentary evidence will often be
difficult, as documents may not clearly express the predicates, and they will
generally contain many predicates, some true, some false.

One way to resolve this would be to add a link to the document, but extract out
the key predicates required to support or oppose an argument and rate those
predicates individually.
In this case, the document is just a "source" from which actual evidence and
impact predicates are being constructed.
This will likely involve creating a higher-order predicate like "If A and B and not C, then predicate X is true," which would also need to be rated (here users would be rating the validity of this higher-order predicate argument itself, not A, B, or C).

Note in this case that several such higher order predicates could be created
from the same document, with each creator arguing for a differing interpretation
of the source document's meaning in relationship to X.
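A minimal sketch of how such a higher-order predicate might be represented and scored. The shapes, the min-combination of premises, and the neutral 0.5 default are my assumptions, not settled design:

```typescript
// A higher-order predicate: a rule whose premises are other predicates'
// truth ratings, and which itself carries a validity rating.

type Literal = { predicateId: string; negated: boolean };

interface HigherOrderPredicate {
  premises: Literal[];      // e.g. A, B, not C
  conclusionId: string;     // predicate X
  validityRating: number;   // community rating of the rule itself, 0..1
}

// Truth ratings for the base predicates, 0..1.
type Ratings = Record<string, number>;

// Support the rule lends to X: the weakest premise bounds the conjunction
// (a simple min-combination, one of many possible choices), scaled by how
// valid raters believe the rule itself to be.
function supportFor(rule: HigherOrderPredicate, ratings: Ratings): number {
  const premiseTruths = rule.premises.map((lit) => {
    const t = ratings[lit.predicateId] ?? 0.5; // unknown -> neutral
    return lit.negated ? 1 - t : t;
  });
  return Math.min(...premiseTruths) * rule.validityRating;
}
```

A competing interpretation of the same source document would simply be a second `HigherOrderPredicate` with its own premises and validity rating.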
In addition to providing evidence, users should be able to describe how they
obtained knowledge about a predicate's truth or the evidence for it.
For example:
1. 'I' read 'predicateX' asserted from a source I trust: link_to_source_material or unknown
2. 'I' heard 'predicateX' asserted from a source I trust: name_of_source or unknown
3. 'I' personally experienced 'predicateX': when/context
4. 'I' personally deduced 'predicateX': when/how
5. 'I' believe 'predicateX' is true/false because of the arguments presented here: source(s).

Note that 5 is a way someone could vaguely explain why they believe something is true if they don't want to create their own detailed argument for their position, or they feel their reasoning should be clear from the sources they point to. Ideally they would point to a clearly laid out predicate argument, rather than documents, so the system should guide them through existing predicate arguments that seem likely to match their reasoning, which they may want to pick from.

This in turn would also likely result in them rating the argument they select as
their reasoning.
The user could make similar predicate assertions for the evidence predicates. Note that the statements above are themselves predicates that could be rated by others.

Only the speaker can assert the statement, but others can rate the believability of the person's statement.
Notes from Láadan (inspiration for recording how knowledge was obtained)

The evidence particle - this occurs at the end of statements and indicates the trustworthiness of the statement:

  • Known to speaker because perceived by speaker, externally or internally
  • Known to speaker because self-evident
  • Perceived by speaker in a dream
  • Assumed true by speaker because speaker trusts source
  • Assumed false by speaker because speaker distrusts source; a distinct form is used if evil intent by the source is also assumed
  • Imagined or invented by speaker, hypothetical
  • Used to indicate that the speaker states a total lack of knowledge as to the validity of the matter

Types of statements

These distinctions seem like they will be useful in a decision-making system:

  • Declarations
  • Questions
  • Commands
  • Requests
  • Promises
  • Warnings

Ideas for Debate software prototype

Meta-data for all types of predicates

Text objects: adding non-predicate text to a debate

Text objects are any text that isn't expressible as a simple predicate.

Examples of a text object would be an essay, a resume, etc.
Text objects cannot be directly rated, but predicates can be attached to a text object to rate it in various ways (e.g. "This essay is written very clearly.").
It would be useful to be able to "anchor" a predicate to particular locations in
a text object (a word or phrase, for example).

Blobs: adding binary objects to a debate

Blobs are binary objects like pictures, spreadsheets, word processor documents, etc. Blobs can be used to associate images or any non-purely-textual material with a predicate.
Similar to text objects, blobs can be linked to a predicate, and predicates can
be linked to a blob.

It would also be useful to be able to "anchor" a predicate to particular
locations in a blob, but such anchors will require knowledge of the "type" of
the blob so that some appropriate coordinate system can be used.
Documents (files): text objects and blobs

In practice, we can probably classify both "text objects" and "blobs" as a single type of thing: a document file, and then have special handling for anchoring within each document type which the software "knows" about.

As a minimal set of supported document types, we should support plain text
objects as known types that we can anchor to interior portions of.
Anchors should be hierarchical, enabling anchoring to a portion of an image
inside a word document, for example.
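A sketch of what such hierarchical anchors might look like as data. The step kinds and field names are illustrative assumptions:

```typescript
// A hierarchical anchor path: each step narrows the location inside a
// typed document (e.g. descend into an image embedded in a word document,
// then select a pixel box within it).

type AnchorStep =
  | { kind: "textRange"; start: number; end: number }           // plain-text offsets
  | { kind: "imageBox"; x: number; y: number; w: number; h: number }
  | { kind: "childDocument"; documentId: string };              // descend into a sub-document

interface Anchor {
  rootDocumentId: string;
  path: AnchorStep[]; // ordered outermost -> innermost
}

// Human-readable rendering, useful for link tooltips.
function describeAnchor(a: Anchor): string {
  const steps = a.path.map((s) => {
    switch (s.kind) {
      case "textRange": return `chars ${s.start}-${s.end}`;
      case "imageBox": return `box (${s.x},${s.y} ${s.w}x${s.h})`;
      case "childDocument": return `doc ${s.documentId}`;
    }
  });
  return [a.rootDocumentId, ...steps].join(" > ");
}
```

Each coordinate-system `kind` would only be valid for blob types the software "knows" about.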
Tags: adding tags to predicates

One or more domain topic tags can be assigned to a predicate.

Whenever a tag is assigned to a predicate, this generates a first-class
predicate that is vote-able to determine the accuracy of the tag.
E.g. if predicate X is tagged with the "biology" tag, then a predicate is auto-generated that states: "predicate X is related to biology". Tags themselves are not actual predicates, however; they are more like a grouping mechanism for a set of predicates.

Tag names should probably be at most two words long.

  • We probably want some threshold number of votes on the tags before they get
    used by filters and voting algorithms in a community system.
    But we also need to be sure that these tags are "seen" so they can be voted on.
  • Topics themselves will need rating, so there's probably another predicate
    associated with each tag like "I find X a useful tag for filtering".
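The auto-generation step might look something like this sketch (the wording template and id scheme are my assumptions):

```typescript
// Assigning a tag auto-generates a first-class, vote-able predicate that
// states the tag's relevancy to the tagged predicate.

interface TagLink {
  predicateId: string;
  tag: string;
  relevancyPredicateId: string; // the auto-generated, vote-able predicate
}

let nextId = 0;
const predicates = new Map<string, string>(); // id -> predicate body

function assignTag(predicateId: string, tag: string): TagLink {
  const relevancyPredicateId = `p${nextId++}`;
  // The generated predicate is what users actually vote on to judge
  // the accuracy of the tagging.
  predicates.set(
    relevancyPredicateId,
    `predicate ${predicateId} is related to ${tag}`
  );
  return { predicateId, tag, relevancyPredicateId };
}
```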

Linking predicates (and objects)

How should we categorize links? Here are some of the "types" of links I think I've described in this writeup:
1. links between rewordings of a predicate
2. links between an argument and its pro/con sub-arguments
3. links between a policy and debates related to the policy
4. links between arguments and the tags (tag predicates) that apply to them
5. links between predicates and objects

  • I think basically all the links we establish between predicates in this system
    should be bidirectional, but assuming we're using a database for the links I
    guess we have that already.

But even though they should be bidirectionally travelable, the links often do
have a "direction" of a sort, often being one where there is an initial
predicate that "inspires" the other predicate.
For example, a reworded predicate inspired by another predicate, a debate
sparked by a policy proposal, an image added to support an argument versus a
predicate added to rate the image, etc.
Data editing

For the most part, we probably want to maintain an immutable history of edits to the database, so rather than editing data in place we may want to use a copy-on-write style of editing.
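A minimal in-memory sketch of copy-on-write editing; the field names are my assumptions:

```typescript
// Copy-on-write editing: rather than mutating a record, append a new
// timestamped version and always read the latest one.

interface Version<T> {
  value: T;
  editedAt: number; // ms epoch
  editorId: string;
}

class CowRecord<T> {
  private history: Version<T>[] = [];

  edit(value: T, editorId: string, at = Date.now()): void {
    this.history.push({ value, editedAt: at, editorId }); // never overwrite
  }

  current(): T | undefined {
    return this.history[this.history.length - 1]?.value;
  }

  // Full audit trail, for analysis of historical changes.
  versions(): readonly Version<T>[] {
    return this.history;
  }
}
```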

Data model for policy votes

In a community rating system, members vote on a "community-binding" predicate to undertake or not undertake a specific policy. A policy describes a set of actions to be performed. These actions can be either very high-level and vague, or very concrete.

  • One or more debates will likely be created to discuss issues of competing
    policies being voted on by the community.

We need a mechanism for linking policy predicates to debate predicates.

Data model for debates

Each debate argument is a predicate. And similarly, any predicate can be debated.

  • Sub-arguments can be linked to a parent argument in some manner that allows
    both an understanding of and a calculable impact of the sub-argument on the
    parent argument.

WHAT FORM? - No predicate is ever edited, but new predicates can be created, with a "reworded" link to the old ones. A rewording link isn't strictly necessary, but these may allow for a better understanding of how a debate has evolved over time.

  • If a predicate is reworded, what mechanism(s) could be used to attach old
    sub-arguments to the new predicate (some may no longer even apply and certainly
    their impacts could change as a result, but likely most will still apply)?

It will also be useful to know which rewording is currently most popular among users; this can be done by analyzing a recent history of different types of activity associated with each debate.

Example debate

Debate predicate: It is better to have 3 people make decisions for a group than have just one of the three make the decisions.
Arg1: It can take longer for 3 people to agree on a decision than one person.

Arg2: The sum total knowledge available between 3 people is larger.
Arg3: Two people may "collude" to vote for each other's personal needs, and it
is less obvious than if one person is voting just for their own personal needs.
Arg4: More time and effort is consumed if three people have to spend time
thinking about and voting on a decision.

This is especially wasteful if the decision making is simple and obvious.
Blob: Someone attached a powerpoint slide with graphs about experiments
performed solving math problems with varying numbers of solvers.
Arg1 and Arg4 lead to reformulations to clarify that the issue is about non-trivial decisions and that voting will be used rather than requiring 100% consensus.

Reworded debate topic: It is better to have 3 people vote on making non-trivial
decisions for a group rather than just have one of the three people making the
decision.
This also leads to a reworded arg4 (simple and obvious part gets dropped since
debate is about non-obvious decisions now): More time and effort is consumed if
three people have to spend time thinking about and voting on a decision.
Arg1 could arguably be considered a pro or con argument, since it could be argued that taking longer to reach a decision leads to better outcomes (yet another debate, now on the "impact" of the argument).

So how should we model the "impact" mathematically? Maybe it is some
positive/negative scale (where positive impact values add to the computed rating
for the parent predicate and negative impact values subtract from it)? Note that
this could result in a case where a sub-predicate is considered to have little
impact, even though two "sides" think it has a lot of impact, just in opposite
directions.

But this seems like a reasonable outcome.
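One way the signed-impact model could be sketched, computing a parent's rating recursively from its sub-arguments while guarding against cycles in the argument graph. The mean-of-votes combination, the clamping to [0,1], and the cycle fallback are all assumptions, not settled design:

```typescript
// Signed-impact model: each sub-argument contributes
// (mean impact vote in [-1,1]) * (its own computed truth) to its parent.
// A visiting set breaks cycles, since arguments can appear under
// multiple debates.

interface ArgNode {
  id: string;
  directTruth: number; // aggregated direct truth votes, 0..1
  children: { node: ArgNode; impactVotes: number[] }[];
}

function mean(xs: number[]): number {
  return xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
}

function computedRating(n: ArgNode, visiting = new Set<string>()): number {
  if (visiting.has(n.id)) return n.directTruth; // cycle: fall back to direct votes
  visiting.add(n.id);
  // Two camps voting opposite signs on impact can cancel out, which the
  // notes above accept as a reasonable outcome.
  const shift = n.children.reduce(
    (sum, c) => sum + mean(c.impactVotes) * computedRating(c.node, visiting),
    0
  );
  visiting.delete(n.id);
  return Math.min(1, Math.max(0, n.directTruth + shift));
}
```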
Filtering mechanisms for predicates

Note: these filtering ideas apply both to filtering policy predicates and debate predicates.
Filtering should be done by some rating formula that results in a ranked
ordering of the predicates to view, with the highest ranked at the most visible
position.

If predicates are paginated in this ranking, then there is probably no need for an absolute filtering out of any predicate, but we could also include some kind of hard limit (e.g. a user may only want to see 10 pages of predicates, or may not want to see any predicates below some threshold rating).
A user could have multiple rating formulas that they may want to choose from,
depending on what they are interested in seeing at a given time.

For example, if a user wants to simply rate predicates in one of their domains
of interest, they might select a rating formula that ranks high that type of
predicate.
Or if they want to debate a topic in near real-time, they could select a rating
formula that ranks actively debated predicates.
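Selectable rating formulas could be sketched like this; the formula names and row fields are illustrative, not settled design:

```typescript
// Filtering as ranking: a user-selected rating formula scores each
// predicate, and the list is sorted descending with an optional hard
// threshold cut-off.

interface PredicateRow {
  id: string;
  importance: number;     // direct "people should see this" votes
  recentActivity: number; // e.g. votes + new arguments in the last hour
}

type RatingFormula = (p: PredicateRow) => number;

// A user could keep several named formulas and switch between them.
const formulas: Record<string, RatingFormula> = {
  importance: (p) => p.importance,      // browsing a domain of interest
  liveDebate: (p) => p.recentActivity,  // near real-time debating
};

function rank(
  rows: PredicateRow[],
  formula: RatingFormula,
  minScore = -Infinity
): PredicateRow[] {
  return rows
    .filter((p) => formula(p) >= minScore) // optional hard limit
    .sort((a, b) => formula(b) - formula(a));
}
```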

  • Users can filter based on domain tags they either want to see or don't want to
    see based on voting on the accuracy of the tag.
  • Users can vote directly on the importance of people seeing a predicate (not a
    good-bad value judgment, just voting it is an important topic that people should
    see).
  • In cases where other people make their filters available, people can filter
    based on filters used by one or more other users to avoid having to create
    detailed filters of their own.
  • Users can filter based on "linked" predicates (for example, to find or exclude
    all re-worded forms of a debate predicate).

In such a case, we might want some way to figure out which of the re-wordings is the best (which might not be the most active, since it may be a new re-wording; this probably needs some more thought).

  • Individuals can filter based on one or more rating systems (e.g.
    a specific CRS or the user's personal SRS).
    There are also a number of options for how to select and combine values from
    multiple rating systems over the set of all predicates to determine a ranking.
  • Since it is probably desirable that filters are extremely personalized,
    perhaps it is better to use a filter rating algorithm where the user's personal
    vote, if specified, has more dominance than the ratings of others.

For example, if the user has directly rated a topic, that rating would be the
one used for the topic, instead of the aggregated rating (unrated ones would
still use the calculated rating, so "suggested topics" would still be findable
from the user's network).

  • A filter rating could be specific to a specific tagging, or even to a
    particular debate or policy.
    For example, debates on politics could be filtered to ignore arguments about
    dead politicians.

This perhaps could be done via a combination of a tag plus a key word search.
Implementation issues

Ratings can be very dynamic. If predicates are stored in an SQL database, for example, does a user's software just periodically update the predicates with ratings from their rating system, then generate a filtering query?

Or are the raw predicates fetched from the database, then ratings fetched to use for ranking? If the latter, it seems like there would need to be filtering on which predicates get stored in the database in the first place. So at any given time, the database would only contain predicates that met some minimal rating formula threshold at the time each was last reported to the user's node.

But this brings up the question of how predicates arrive at the user's node, a
question which also depends on the topology of the network.
How do predicates get added to a user's database?

For fully connected nodes (e.g. initial community systems will fall into this category), each node will need to make a decision about every predicate that gets created (in an extreme case this could just be blocking predicates from peers that are particularly bad at generating predicates).

In a subjective rating system, a node would have the option to not pass on
predicates to other peers if they don't consider them interesting/desirable,
resulting in an automatic form of filtering.
There's also the issue of whether to push or pull predicates, or some combination of both. Both options allow for filtering at the network level: a node could explicitly pull predicates above a given tag threshold from the peer it is querying, or it could register with the peer that it only wants to receive pushes for predicates above the tag threshold.
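The tag-threshold registration might be sketched as follows; the message and subscription shapes are my assumptions:

```typescript
// Network-level filtering: a node registers a tag-threshold subscription
// with a peer, and the peer only pushes qualifying predicates.

interface Subscription {
  tag: string;
  minTagRating: number; // 0..1, minimum relevancy rating for the tag
}

interface PredicateMsg {
  id: string;
  tagRatings: Record<string, number>; // tag -> aggregated relevancy rating
}

// Evaluated by the pushing peer before forwarding a predicate.
function shouldPush(msg: PredicateMsg, sub: Subscription): boolean {
  return (msg.tagRatings[sub.tag] ?? 0) >= sub.minTagRating;
}
```

The same check could serve the pull direction, applied to the results of an explicit query.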

Solution for now

For now, I suppose we should assume a relatively small but fully connected model, where rating changes are constantly pushed by peers (similar to Lem's prototype) and we update the ratings in the database. If we use a copy-on-write methodology to allow for analysis of historical changes, then we would timestamp all the changes to a rating as reported by each peer.
In a community rating system, each peer would just directly report its own
ratings "vote" change to one or more central databases that would then generate
aggregated ratings.

Prototype UI

For now, I propose we keep the prototype UI as simple as possible, so we can quickly create and deploy something we can test in the real world, primarily using paginated lists displayed on each page. I'm assuming a web browser-based UI (e.g. written in TypeScript). Here's a list of UI areas I can think of (not all would need to be created in the prototype):
- A page to define different types of predicate rating filters.

I think we should skip this page in the prototype and just have some "default" rating filters, for example one for filtering debate topics based on a direct rating of the topic's importance, one for filtering sub-arguments within a debate based on their impact, etc. There are a lot of different options for how we can filter, and it shouldn't be a big issue when dealing with a small number of reasonable people participating in the prototype testing.

Even when we open it to some public testing, probably a simple muting mechanism
for spammers is the primary need at the beginning, along with basic filtering
based on tags.
Filtering will become more important the more active the userbase becomes.

  • A page (FILTERED DEBATE LIST) where we could see debates/policies ranked and
    paginated based on a dropdown at the top of the page where the user can select
    from available rating filters.

From this page, user can select any debate/policy to navigate to.

  • A page for viewing a single specific debate predicate containing (DEBATE
    PAGE): - the wording of the predicate - if reworded forms of the debate exist,
    links to reworded forms of the debate predicate (paginated and ranked by a
    rewording filter).
  • immediate sub arguments (paginated and ranked by an argument filter such as
    rated "impact").

Sub arguments are also links to a debate on the sub argument itself, allowing
for navigation down the argument tree.

  • links to other debates that this is a sub-argument of, allowing navigating to
    any debate depending on this argument.
  • A set of navigation path links at the top of the page so that the user can
    navigate back to any point in the path that led them to the current predicate.
    This would need some kind of squeezing if it gets too long, and we would need
    some way to assign "short names" for links too.

This is probably too much work for now and maybe we can just rely on browser
navigation for this.
Every displayed predicate should have the following info:
  • the wording of the predicate
  • the user's own rating (starts as "Vote" or maybe just "?", clickable to assign a rating)
  • a direct rating of the predicate from a ratings system chosen by the user.
This could be clickable to change which rating system is used based on available
ratings systems/algorithms.

  • a computed rating based on its sub arguments (using whatever algorithm the
    user selected for computing this rating).

This could be clickable to change the rating algorithm.

  • a button to reword the predicate.
    This would open a new predicate page with the wording of the old predicate as a
    starting point.
  • a button to add a sub-argument - Optional: we could show the "filter rating"
    computed by whatever filter is allowing the predicate to be displayed (this
    wouldn't apply when someone directly clicks on a predicate, just when it is in a
    list of predicates and was determined by such a filter).

Implementation Issues:
  • How should we limit the size of predicates for reasonable viewing? Maybe just a simple character limit (e.g. 160 characters, allowing a maximum of 80 characters per line and 2 lines)? Another option would be some kind of "squeezing" of predicate text when it is over a specific length (this would discourage creating long predicates without being an absolute prohibition).

Both methods could be employed in tandem, with some "maximum allowed length" and
a lower "squeeze length", with the max length a database setting and the squeeze
length just a UI setting.
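The two-tier length policy might look like this sketch. The 160/80 numbers come from the character-limit discussion above; the function shapes are assumptions:

```typescript
// Two-tier length policy: a hard maximum enforced at write time (a
// database setting) and a softer "squeeze" applied only in the UI.

const MAX_LENGTH = 160;    // hard limit, e.g. rejected on insert
const SQUEEZE_LENGTH = 80; // UI-only truncation point

// Write-time check against the database setting.
function validateBody(body: string): boolean {
  return body.length <= MAX_LENGTH;
}

// UI rendering: long predicates are squeezed, discouraging length
// without an absolute prohibition (full text still stored).
function squeeze(body: string): string {
  return body.length <= SQUEEZE_LENGTH
    ? body
    : body.slice(0, SQUEEZE_LENGTH - 1) + "…";
}
```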

  • Need to avoid cycles when computing a rating based on sub-arguments.

Other potential features
  • search for predicates matching keywords
  • search for sub-argument predicates matching keywords

Page Mockups

For the initial prototype, I propose we start with just a few pages: debate list, debate page, user list, and a user page.

We need to render the following types of objects: predicates, tags, documents,
and users.
To simplify rendering, we will have standard widgets for objects that can appear
on different pages, but the widgets will have options for different renderings
depending on the context to indicate which actions are currently possible on the
object.
For example, on virtually any page, you can vote on a predicate.

But you can't add sub arguments to a predicate unless you are on that
predicate's "debate" page.
UI Element Notes

[] indicates a button. Buttons will typically open a dialog for input or a dropdown list to select from.

Each predicate can have a list of tags associated with it.
This list is shown on the predicate's page (e.g. the Debate page).
Each tag is shown with a percentage relevancy (which users can click to vote on).
[Add Arg] Opens dialog to add a sub-argument to a predicate.

This can be a new predicate or an existing one (so the dialog needs both search
and creation functions).
[Add Tag] Opens dialog to add related tags to the predicate.
[Creator: Dan] Click on this button to go to the activity page for the creator
of the predicate.
[Arguer: Dan] Click on this button to go to the activity page of the person who
linked this argument to the predicate.
[Vote Truth: NULL] This button would initially be displayed as [Vote Truth:
NULL].

Once a user clicks on it and votes (just open a dialog that accepts percentage values with two decimal places for now, between 0.01 and 99.99), it would display the user's current voted value instead of NULL (or some similar text to indicate no vote has been cast). For example, [Vote Truth: 99.99%].
Also, need a way for a user to remove their vote, for now maybe remove their
vote if they delete everything in the edit box.
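The vote-input handling could be sketched as follows. The 0.01-99.99 range and the empty-box-removes-vote behavior come from the notes above; the regex and labels are my assumptions:

```typescript
// [Vote Truth] input handling: percentages between 0.01 and 99.99 with
// up to two decimal places; an emptied box removes the vote.

// Returns the parsed percentage, null to remove the vote, or undefined
// if the input is invalid.
function parseTruthVote(input: string): number | null | undefined {
  const trimmed = input.trim();
  if (trimmed === "") return null; // user cleared the box: remove their vote
  if (!/^\d{1,2}(\.\d{1,2})?$/.test(trimmed)) return undefined;
  const value = Number(trimmed);
  return value >= 0.01 && value <= 99.99 ? value : undefined;
}

// Button label once a vote exists (or NULL when none has been cast).
function voteLabel(vote: number | null): string {
  return vote === null ? "Vote Truth: NULL" : `Vote Truth: ${vote.toFixed(2)}%`;
}
```

The [Vote Impact] button would reuse the same parsing, differing only in label.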
[Vote Impact: NULL] This button would basically work the same as the [Vote
Truth] button.
[FilterDropDown] is a dropdown list showing default predicate rating filters (probably one for "all tags" and then one for each tag, ordered alphabetically for now).

There's an implication here that predicate rating filters have "names".
Right now I use this control for essentially every "list" in the UI.
[Page Size: 20] This sets the page size of the associated list.
Different lists will have different "default" sizes.

User clicks this button to change the page size of the list; maybe allow values between 1-200?

PAGINATION BAR: a standard bar at the bottom of a list to allow navigation through pages in the list.
Almost every list will have a pagination bar.

For now, the CRS/SRS ratings are just displayed as rating type and percentage
(and the prototype would probably just show a community rating since that's
probably all we'll have for the first prototype).
Eventually these could be dropdown lists so that users could select which
ratings they would like to see.
DEBATE LIST page (very simple page)

Debates [FilterDropDown] [PAGE SIZE: defaults to 20 per page]:
  - Dogs are better than cats.
    CRS (20 votes): 99% SRS: 75% [Creator] [Reword] [Vote Truth]
  - Ice cream is healthy.
    CRS (20 votes): 99% SRS: 75% [Creator] [Reword] [Vote Truth]
  - Table tennis is the best sport.
    CRS (20 votes): 99% SRS: 75% [Creator] [Reword] [Vote Truth]
  - ...
PAGINATION BAR

DEBATE page (4 sections: debate predicate, rewordings, sub-arguments, super-arguments)

Debate: It is better to have 3 people make decisions for a group than have just one of the three make the decisions.
CRS (20 votes): 99% SRS: 75% [Creator] [Reword] [Vote Truth] [Add Tag] [Add Arg] [Attach File]

Tag List [LIST SIZE: defaults to 5]: [TagA: 99% Tagger: Dan] [TagB: 95% Tagger: Dan] [TagC: 50% Tagger: Dan]

Rewordings [FilterDropDown] [PAGE SIZE: defaults to max of 3 per page]:
  - It is better to have 3 people vote on making non-trivial decisions for a group rather than just have one of the three people making the decision.

    CRS (20 votes): 99% SRS: 75% [Creator] [Reword] [Vote Truth] [Add Arg]
PAGINATION BAR

Arguments ranked by total impact (impact * rated truth) [FilterDropDown] [PAGE SIZE: defaults to 10 per page]:
  - The sum total knowledge available between 3 people is larger than just one of them.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  - More time and effort is consumed if three people have to spend time thinking about and voting on a decision. This is especially wasteful if the decision making is simple and obvious.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  - It can take longer for 3 people to agree on a decision than one person.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Vote Truth] [Vote Impact] [Add Arg]
  - Two people may "collude" to vote for each other's personal needs, and it is less obvious than if one person is voting just for their own personal needs.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Reword] [Vote Truth] [Vote Impact] [Add Arg]
PAGINATION BAR

Attached Files [FilterDropDown] [PAGE SIZE: defaults to 5 per page]:
  - [Solve times and correctness of math solutions with multiple solvers]

Debates depending on this argument [FilterDropDown] [PAGE SIZE: defaults to 5 per page]:
  - The President of the US has too much power.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  - Dan should be replaced by Donna, Eric, and Pete as manager of SynaptiCAD.
    CRS (20 votes): 99% SRS: 75% [Creator: Dan] [Arguer: Dan] [Reword] [Vote Truth] [Vote Impact] [Add Arg]
PAGINATION BAR

USER page (one for each user)

User Name: Alice
Activity Summary [DropDownOfTimePeriods: defaults to Last Month]: Rewords: 5 Arguments: 20 Impact Votes: 40 Truth Votes: 200
Recent Activity [PAGE SIZE: defaults to 20 per page]:
Debate Filter: [DropDownFilter] Action Filter: [Rewords][Arguments][Impact Votes][Truth Votes] Tag Filter: [DropDownToSelectATag]
  - Voted 99% on TagA for "It is better to have 3 people make decisions for a group than have just one of the three make the decisions."

  - Voted 99% on truth of "It is better to have 3 people make decisions for a group than have just one of the three make the decisions."
  - Created rewording of X to: "It is better to have 3 people vote on making non-trivial decisions for a group rather than just have one of the three people making the decision."
  - Voted 99% on impact of "The sum total knowledge available between 3 people is larger." on "It is better to have 3 people make decisions for a group than have just one of the three make the decisions."
PAGINATION BAR

PROXIED VOTING POWER

Note: In a community rating system, we can also display someone's accumulated proxied voting weight on tags. Ordered by "All Tags" followed by the highest proxied weight on each tag.
[All Tags: +10%] [TagB: +200%] [TagA: +50%]

USER LIST page (filterable list of all users)

Users (debate filtered) [PAGE SIZE: defaults to 40 per page]: [DropDownFilter]
  - [Alice] last active ...

  - [Bob] last active ...
  - [Carol] last active ...
PAGINATION BAR

//Users can also include "bots" that auto-generate some actions.
//For example, a "pdf bot" could be added to the system to parse previously-unparseable pdf docs into sub-documents.
//Or a bot could be used to automatically assign tags to a predicate or document.

users
  id : integer not null
  creation_time : timestamp not null
  username : char not null
  firstname : text
  lastname/surname : text
  url : text
  email : text
  email_alert_frequency : time_interval

//Almost all data in the database is created via user and bot actions.

The actions view tracks how the data evolves over time.
actions (for now just make a view of the associated tables) creation_time :
timestamp creator_id : integer not null type : [created_tag, created_predicate,
created_document, created_predicate_tag_link, created predicate_document_link,
created_document_predicate_link, created_argument_link, voted_on_predicate,
delegate_weight, follow_predicate, follow_user, voting_result] details_id :
integer not null that references the specific "type" table (is there some better
way to do this?) //The predicates table stores "raw" predicates, debates, and
policies (debates with a decision_time) predicates id : integer not null
creation_time : timestamp not null creator_id : integer not null decision_time :
timestamp body : char not null aggregated_rating details (computed from ratings
table and selected rating algorithm each time their is a vote) aggregated_rating
: money not null number_of_ratings : integer not null weight_of_ratings :
integer not null evalution_time : timestamp not null //Tracks historical
rating/voting results for a predicate so that they don't need to be recomputed
voting_results id : integer not null creation_time : timestamp not null
creator_id : integer not null //this user will be a bot predicate_id : integer
not null aggregated_rating : money not null number_of_ratings : integer not null
weight_of_ratings : integer not null //Tags primarily exist to represent
information domains.
//Tags can be attached to a predicate to categorize it based on applicable
domains.
tags id : integer not null creation_time :timestamp not null creator_id: integer
not null name: char not null useful_predicate_id: integer not null //Documents
are used to attach evidence and related information to predicates documents id :
integer not null creation_time :timestamp not null creator_id : integer not null
displayname : char document_type_id : integer not null text_id : integer body :
binary data (unique) text : (text search indexes on this field) If a document is
parseable, sub documents get created automatically when the document is added,
and parent/child links are also created, including multi-level hierarchies
within a document.

Document links can be used as part of an anchor path for a link.
//document-to-document links will normally be created by bots
document_links
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
parent_id : integer not null
child_id : integer not null

//users create these to categorize a predicate into information domains
predicate_tag_links
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
predicate_id : integer not null
tag_id : integer not null
relevancy_predicate_id : integer not null

//for attaching relevant documents to a predicate
predicate_document_links
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
predicate_id : integer not null
document_id : integer not null
relevancy_predicate_id : integer not null

//I considered merging the table below with predicate_document_links and just adding a direction field, but it seems like the columns could vary more with time:
//for attaching descriptive predicates to a document
document_predicate_links
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
predicate_id : integer not null
document_id : integer not null
relevancy_predicate_id : integer not null

argument_links
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
debate_id : integer not null
argument_id : integer not null
impact_predicate_id : integer not null

//Created whenever a user rates a predicate
ratings
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
predicate_id : integer not null
rating : money

A user can create follows to have a "favorites" list of predicates and users that is easily accessible from the user's "follows" page:
* Debates I'm following (predicate list)
* Users I'm following (user list)

Follows can optionally include an "expiration_time" after which the follow is effectively canceled.

The last created follow of an object always carries the "dominant" expiration_time, so an existing follow can be canceled at any time by adding a new follow of the object whose expiration_time is set in the past.
A user can either directly create follows or use a bot to dynamically manage his follows (e.g. a bot could observe that a user spent 10 minutes reading various arguments related to a debate and add a follow with an expiration time based on how involved the user seemed to be).
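The "last follow wins" rule can be sketched as below. The dict-based follow rows and the `is_followed` helper are illustrative, but the fields mirror the followed_* schema: the most recently created follow is dominant, so a newer follow with a past expiration_time cancels the follow.

```python
from datetime import datetime, timedelta

def is_followed(follows: list[dict], object_id: int, now: datetime) -> bool:
    mine = [f for f in follows if f["object_id"] == object_id]
    if not mine:
        return False
    # The last created follow of the object is the dominant one.
    latest = max(mine, key=lambda f: f["creation_time"])
    exp = latest.get("expiration_time")
    return exp is None or exp > now

now = datetime(2025, 1, 1)
follows = [
    {"object_id": 7, "creation_time": now - timedelta(days=2),
     "expiration_time": None},
    # A newer follow with an expiration in the past cancels the older one:
    {"object_id": 7, "creation_time": now - timedelta(days=1),
     "expiration_time": now - timedelta(hours=1)},
]
```

With only the first row present the user still follows object 7; adding the second row effectively unfollows it without deleting any data.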

The user's follows are also used to generate "alerts" when activity occurs
related to the followed objects.
Activity alerts fill an "alerts" list with clickable links to the activity attached to the followed predicate or user.
Follows can optionally be configured to send an out-of-band alert message with activity info (e.g. an email or text message).
followed_predicates
id
creation_time : timestamp not null
creator_id : integer not null
predicate_id
alert_type : enum
expiration_time : timestamp //follow is ignored after this time

followed_users
id
creation_time : timestamp not null
creator_id : integer not null
user_id
alert_type : enum
expiration_time : timestamp //follow is ignored after this time

delegations
id : integer not null
creation_time : timestamp not null
creator_id : integer not null
receiver_id : integer not null
tag_id : integer
weight : integer (weight can be negative, in which case it is a counter vote)

Voting algorithm:
Inputs:
* a list of all tags on the predicate over the delegation threshold
* a list of all the user votes on the predicate
* adjust each voter's weight by the delegations made to them.

In the case where a user has delegated based on multiple tags meeting the delegation threshold (80% and at least N voters, where N defaults to 3) and two or more of those delegation receivers have voted on the predicate, his proxied weight is shared proportionately between them.
If a user votes on a predicate, all his proxied power is canceled for that
predicate.
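The weight-adjustment step above can be sketched as follows. The data shapes (voters as a set of ids, delegations as dicts) are illustrative rather than the actual schema rows, and negative counter-vote weights are ignored for simplicity: a delegation applies when its tag is on the predicate, a direct vote cancels all of a delegator's proxied power, and the proxied weight is split proportionally among the receivers who voted.

```python
def effective_weights(voters: set[str],
                      delegations: list[dict],
                      predicate_tags: set[str],
                      base_weight: float = 1.0) -> dict[str, float]:
    weights = {v: base_weight for v in voters}
    # Group the delegations that apply to this predicate by delegator.
    by_delegator: dict[str, list[dict]] = {}
    for d in delegations:
        if d["tag"] in predicate_tags:
            by_delegator.setdefault(d["delegator"], []).append(d)
    for delegator, ds in by_delegator.items():
        if delegator in voters:
            continue  # a direct vote cancels all proxied power
        active = [d for d in ds if d["receiver"] in voters]
        if not active:
            continue
        total = sum(d["weight"] for d in active)
        for d in active:
            # Share the delegator's weight proportionally among receivers.
            weights[d["receiver"]] += base_weight * d["weight"] / total
    return weights
```

For example, if carol delegated with weight 2 to alice on one tag and weight 1 to bob on another, and both receivers voted, alice gets two thirds of carol's weight and bob one third; if carol votes herself, neither gets anything.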
Objects related to votes:
* policies
* work proposal ideas
* labor requests
* resource requests

What do ratings filters look like with the above data model?
* we want to filter based on tags
* we want to filter based on key word searches (available for predicates and text objects)
* we want to filter based on links
* we want to filter based on creator
* we want to filter on activity by a specific creator
* we want to filter on activity by a certain number of users within a specified period of time.

Filter outputs:
* predicates
* documents
* tags
* users
* actions
* action types

Linked filter outputs:
* tags linked to a debate (categories)
* rewordings linked to a debate (rewordings)
* predicates linked to a debate (arguments)
* documents linked to a debate (evidence)
* predicates linked to a document (meta-data)

Default argument filters: for now, we can allow users to create direct queries and "name" them.

Filters are classified according to their output (e.g.
predicate filter, tag filter, document filter, argument filter).
filters
id
creation_time : timestamp not null
creator_id : integer not null
type : enum [predicates, documents, tags, arguments]
name
query

Filtering based on viewing history: It would also be useful for a user to be able to track information they've already seen in the past (their viewing history), allowing them, for instance, to create a filter that only shows new documents or arguments linked to a debate.

But even in many community rating systems, it seems likely that most users will not want their viewing information shared with others. Even in communities where sharing such information is desired for some kind of reward, viewing history could easily be obscured if the user employs a bot that "pretends" to be the user.
So probably rather than directly storing viewing history in the central
database, the user's UI should maintain a local database related to that user's
viewing history.

For example, it could track previously viewed debates and the associated
arguments that were displayed in the various filters used at that time.
In this algorithm, the UI would request longer lists than it needs, then do a second-phase filtering of the data displayed in the UI based on the local "previously viewed" data (or possibly just "greying" previously viewed data).
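A minimal sketch of that second phase, assuming hypothetical row dicts with an `id` field and a local set of previously seen ids; `second_phase_filter` is an illustrative name, not part of the design:

```python
def second_phase_filter(rows: list[dict], seen_ids: set[int],
                        wanted: int, grey: bool = False) -> list[dict]:
    if grey:
        # Keep everything but mark previously viewed rows for grey rendering.
        return [dict(r, greyed=(r["id"] in seen_ids)) for r in rows[:wanted]]
    # Otherwise show only unseen rows, up to the requested count.
    return [r for r in rows if r["id"] not in seen_ids][:wanted]
```

The server-side query would over-fetch (request more than `wanted` rows) so that enough unseen rows survive the local pass.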
The "greying" approach is similar to that used by web browsers. Since the plan is that most objects viewed will in fact be web links, we can get this ability "for free" without writing any code if we are satisfied with greying, though the greying only takes place once the user clicks through to the link.

But note this differs from the originally described version of this filtering, where the user only has to view the link, rather than click on it, for the filter to count it as "read".

Template predicates
* Useful for grouping related predicates.
* At its most basic level, these templates serve as an encoding mechanism for "families" of predicates.
* Mostly designed to allow voting on multiple alternative predicates (e.g. weighted voting).
* Template parameters are strictly positional.
* Argument typing is required because the arguments to a template are ids of objects in various tables.
* To form a "predicate instantiation", we connect a template and the template arguments (using a row in the predicate_instantiations table).
* Note: there's no effective encoding compression because we cache the rendered string in the predicates table.

//Examples of predicate templates
I vote for {user}. //here the argument is required to be a user object
Argument {predicate} is relevant to Debate {predicate}.

predicate_templates
id, creation_time, creator, body
1, now, 7, "User {user} is awesome"

predicate_instantiations
predicate_template_id, argument_tuple, predicate_id (unique)
1, , 7

To get a set of predicates for a weighted vote on alternatives, create a predicate_template (e.g. id=42), then query for all predicates where predicate_template_id = 42.

This query would be used both for voting and for the vote evaluation.
I vote for option1.
I vote for option2.
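The instantiate-then-query flow can be sketched with the standard-library sqlite3 module. The columns are simplified relative to the schema above, and the `{}` placeholder stands in for the wiki's `{user}`-style slots, since template parameters are strictly positional; the rendered string is cached in the predicates table as described.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE predicate_templates (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE predicates (id INTEGER PRIMARY KEY, body TEXT,
                         predicate_template_id INTEGER);
""")
con.execute("INSERT INTO predicate_templates VALUES (42, 'I vote for {}.')")

def instantiate(template_id: int, *args) -> int:
    (body,) = con.execute("SELECT body FROM predicate_templates WHERE id = ?",
                          (template_id,)).fetchone()
    rendered = body.format(*args)  # parameters are strictly positional
    cur = con.execute(
        "INSERT INTO predicates (body, predicate_template_id) VALUES (?, ?)",
        (rendered, template_id))
    return cur.lastrowid

instantiate(42, "option1")
instantiate(42, "option2")

def alternatives(template_id: int) -> list[str]:
    # The same query serves both voting and vote evaluation.
    return [b for (b,) in con.execute(
        "SELECT body FROM predicates WHERE predicate_template_id = ? ORDER BY id",
        (template_id,))]
```

Querying `alternatives(42)` returns the full family of rendered predicates for the weighted vote.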


πŸ”’ ARCHIVE VERIFICATION CERTIFICATE
Cryptographic authenticity and integrity verification

Source: View Article | Method: enhanced_session

Status: βœ… Complete | Words: πŸ“Š 6523

Topics: 🏷️ #archive #content

Content Hash: 94c1762a5b0cf6d1c4946c7880185c17fd7cbabfcf79cd3174d950b9d44d708b
HTML Hash: 94c1762a5b0cf6d1c4946c7880185c17fd7cbabfcf79cd3174d950b9d44d708b
Blockchain Hash: 79f53ff7efd912fccde64a24e44b621dee2cff6845ca36b13d77d8eec6a9f241
Server: wiki.peerverity.info | Type: unknown
SSL Cert: verified
Blockchain Proof: 79f53ff7efd912fccde64a24e44b621dee2cff6845ca36b13d77d8eec6a9f241
Blockchain: βœ… Yes | Verified: 2025-06-12T15:07:52.646593
Status: βœ… Cryptographically Verified

Archived for historical preservation βš–οΈ ArcHive Professional