[VeBetterDAO proposal] Creating a Transparent, Merit-Based Reputation Layer for the VeBetterDAO Ecosystem

Full version of the proposal in this PDF

Proposal Summary

This proposal introduces a comprehensive upgrade to the current VeBetterDAO dApp profile page by transforming it into a transparent, data-driven, and reputation-based discovery layer for the ecosystem.

The objective is to:

  • Improve transparency across all ecosystem dApps;

  • Provide users with meaningful performance and quality indicators;

  • Incentivise high-quality builders with resources beyond token allocations, such as increased exposure and prime placement in VeWorld/VeBetter.com;

  • Establish a merit-based ranking and reputation framework;

  • Improve discovery and engagement across VeBetterDAO platforms.

As the ecosystem grows beyond 50 dApps, the current static profile system is no longer sufficient to support informed user decisions or healthy ecosystem competition.

This proposal introduces four structural layers: Enhanced Basic Information, Transparent Data & Metrics, Ratings, Reviews & Social Feedback, and Reputation, Badges & Scoring System. An overall score for each dApp will be calculated based on the combined performance across these four layers.

Background & Problem Statement

VeBetterDAO has grown rapidly in both the number of dApps and total B3TR distribution. However, the profile infrastructure has remained largely static.

Currently:

  • dApp profile pages are outdated and do not provide enough timely information for users;

  • There is no differentiation between high-performing and low-performing dApps;

  • No user feedback mechanism exists;

  • No reputation layer exists.

As a result:

  • Users cannot easily evaluate which dApps are active or trustworthy;

  • Builders have limited incentive to maintain or improve their profile presence;

  • High-performing dApps are not sufficiently rewarded with visibility;

  • Low-effort or inactive projects appear visually equal to high-quality applications.

This creates an inefficient discovery environment and weakens ecosystem trust signals.

Detailed Specification

We propose transforming the current dApp profile page into a lightweight “Web3 App Store Layer”, combining transparency, metrics, feedback, and merit-based ranking.

The new system consists of four integrated layers:

  1. Enhanced Basic Information;

  2. Transparent Data & Metrics;

  3. Ratings, Reviews & Social Feedback;

  4. Reputation, Badges & Scoring System.

Layer 1: Enhanced Basic Information

Make the profile page the single source of truth for each dApp.

Existing Fields (Retained & Refined)

  • Project Logo;

  • Project Description;

  • Treasury Address;

  • Social Links;

  • Official Website;

  • Member Since (Join Date);

  • Endorsement Status;

  • Distribution Strategy:

    • Details;

    • Rewards Statistics - Total B3TR Distributed, Actions Rewarded, Unique Users;

    • Rewards Distributors.

  • Social Media Updates - formerly titled ‘App Updates’:

    • For example, official communications disseminated via social media platforms (e.g., 𝕏), covering campaign initiatives and community engagement efforts.
  • Preview - formerly titled ‘Screenshots’:

    • where the dApp may present a preview of its primary interface page.

Recommended Additional Fields

  • Founder/Team Background;

  • Support Channel (Telegram/Discord/Email);

  • Tutorial (Video/Docs);

  • ‘What’s New’:

    • pertaining to product updates, dApp owners may submit concise update logs or changelogs (e.g. limited to 300 characters per entry), each of which is timestamped and wallet-signed.
  • Example:

    • Jan 12 - Added referral program;

    • Jan 20 - Fixed reward calculation bug.

  • Roadmap;

  • Partners;

  • Whitepaper.

Incentive: Profile completion remains flexible. However:

  • More completed fields → higher Profile Completeness Score;

  • Higher profile score → better visibility in rankings.

This creates a soft incentive for transparency without enforcing rigid requirements.

Layer 2: Transparent Data & Metrics

The current profile page only displays dApp Allocation information. We propose highlighting key metrics that users value most, with trend indicators (↑ ↓) for intuitive insights, such as:

  • Distribution Performance:

    • The percentage of tokens rewarded to users relative to the total received by the dApp;
  • Activity Metrics:

    • Weekly Active Users (WAU);

    • New Users (Last 7 Days).

Additional data and charts can be accessed via a collapsible Additional Metrics section for deeper exploration:

  • Allocation Transparency:

    • Total Allocation Received;

    • Latest Allocation Received;

    • Average Allocation Received per Round.

  • Distribution Performance:

    • Total Rewards Distributed (%);

    • Rewards Distribution in Last Round (%);

    • Average Distribution Ratio;

  • Activity Metrics:

    • Total B3TR Actions;

    • Total Unique Users.

  • Visual Format:

    • Allocated vs. Distributed comparison;

    • 7-day activity mini charts.

Rationale

Users want to know:

  • Is this dApp active?

  • Is it actually distributing its allocation?

  • Is it growing?

  • Are real users interacting with it?

Displaying performance metrics directly builds transparency and accountability.

Layer 3: Ratings, Reviews & Social Feedback

Introduce community-driven quality signals. To provide dApps with clearer user feedback and help users gain a more intuitive understanding of the apps, a rating and review system is introduced in this section.

Star Rating (1-5)

  • Only wallets that interacted with the dApp can rate, e.g.

    • the wallet has executed at least 1 action on the dApp contract;

    • the wallet has received B3TR distribution from that dApp.

  • One rating per wallet per dApp:

    • can update anytime;

    • but only the latest rating counts (latest overwrites previous).

Written Reviews

  • Only users (interacted wallets) can write reviews;

  • Min. length requirement (e.g., 20 words / 100 characters);

  • Editable / deletable by author.

Reviews Engagement

  • Thumbs Up/Down on Reviews

    • any wallet can thumbs up/down a review;

    • can update anytime;

    • but only the latest counts.

  • Flag/Report Mechanism

    • Review auto-hidden after X unique reports (e.g. 25).

Anti-Spam Mechanism: All actions (rating, updating, reviewing, voting, flagging):

  • Require wallet signature;

  • Recorded on-chain.

*This significantly reduces spam and bot manipulation.

Layer 4: Reputation, Badges & Scoring System

Establish merit-based recognition and ranking. The reputation and badge system is designed to reward high-quality dApps and highlight top performers.

Badge System

Examples:

  • :graduation_cap: Grantee/Grant Recipient;

  • :trophy: Hackathon Winner;

  • :compass: Navigators Approved - Positive reviews from Navigators are visually highlighted;

  • :locked_with_key: Security Audited - Audited by Audit Partners including VeChain Foundation;

  • :gem_stone: High Distribution Efficiency - Top 10% of dApps by % of Rewards Distributed;

  • :fire: Trending This Week - Top 10% New User Growth;

  • :1st_place_medal: Top 10% Ecosystem dApp - Top Scored dApps.

Additional badges may be introduced in the future, for example, ‘:sparkles: Star dApp of the Month’.

*Depending on the UI design, the number of badges displayed per dApp may be limited. dApps that earn multiple badges can select which ones to feature on their profile page.

Scoring System

The Total Score for each dApp comprises four components; for the detailed formulas, please refer to Appendix A:

  • Profile Completeness (10%)

    • Measures the completeness of required profile fields and the recency of profile updates under Layer 1.
  • Activity Score (30%)

    • Measures real user activity and growth over the previous 7 days.

    • Based on:

      • Weekly Active Users;

      • 7-day user growth rate - New Users;

      • No. of actions in the last 7 days.

  • Distribution Efficiency Score (30%)

    • Measures the efficiency of reward distribution over the allocated amount.

    • Based on % of allocated B3TR actually distributed to users.

  • Community Score (30%)

    • Measures user satisfaction and feedback confidence.

    • Based on:

      • Average star rating;

      • No. of ratings (weighted logarithmically);

      • Review engagement (thumbs up).

The raw numeric score will not be publicly displayed. Instead, it will be used to:

  • Rank dApps in discovery;

  • Determine Top 10% ecosystem status;

  • Trigger dynamic reputation badges, etc.

*The score is recalculated automatically every 7 days using only on-chain and indexed ecosystem data.

Discovery Improvements

Currently, dApps are sorted either alphabetically or by their join date, which does not necessarily reflect their performance or quality.

Proposed default: Sort dApps by Total Score, allowing users to immediately see the highest-performing dApps based on objective metrics. This approach provides a more meaningful and performance-driven ranking, helping users discover top-quality dApps efficiently.

Proposed Curated Lists

  • Recommended by Navigators - dApps with ‘:compass: Navigators Approved’ badge;

  • Ecosystem Picks - dApps with ‘:1st_place_medal: Top 10% Ecosystem dApp’ badge;

  • This Week’s Favourites - dApps with ‘:fire: Trending This Week’ badge.

Pages to Update Default Sorting (full details available in the PDF, link at the top)

  • Governance Apps Page;

  • Governance Dashboard (‘Explore Apps’);

  • VeBetter Main Page;

  • VeBetter Ecosystem Apps Page;

  • VeWorld.

This ensures high-performing dApps gain earned visibility.

Why This Matters

This proposal strengthens VeBetterDAO in five critical ways:

  • Transparency: Users can see real metrics and distribution efficiency;

  • Meritocracy: Performance determines visibility, not join date;

  • Accountability: Builders are incentivised to distribute fairly and maintain activity;

  • Community Voice: Ratings and reviews create social trust signals;

  • Sustainable Growth: High-quality dApps are rewarded, low-effort ones lose visibility.

Conclusion

The current dApp profile page is underutilised because it lacks meaningful information, differentiation, and trust signals.

By upgrading it into a transparent, data-driven, reputation-based discovery layer, VeBetterDAO can:

  • Improve user experience;

  • Encourage builder excellence;

  • Strengthen ecosystem credibility;

  • Promote healthy competition;

  • Create long-term sustainable growth.

This proposal lays the foundation for a transparent, merit-based Web3 application marketplace within the VeBetterDAO ecosystem.

Appendix

(refer to the linked PDF at the top of the page for better reading experience)

Appendix A: Score System Formulas

Profile Completeness (10%)

Let:

  • Mtotal = Total no. of mandatory fields;

  • Mcompleted = No. of mandatory fields completed.

  • StaticScore = 100 x Mcompleted / Mtotal, 0 ≤ StaticScore ≤ 100.

In addition to static completion, profile update recency is also considered:

Product Update Recency

  • DaysSinceProductUpdate = no. of days since last valid ‘What’s New’ entry;

  • ProductRecency =

    • 100%, if DaysSinceProductUpdate ≤ 14;
    • 50%, if 14 < DaysSinceProductUpdate ≤ 30;
    • 0%, if DaysSinceProductUpdate > 30.

Social Communication Recency

  • DaysSinceSocialUpdate = no. of days since last valid ‘Social Media Updates’ entry;

  • SocialRecency =

    • 100%, if DaysSinceSocialUpdate ≤ 14;
    • 50%, if 14 < DaysSinceSocialUpdate ≤ 30;
    • 0%, if DaysSinceSocialUpdate > 30.

Combined Recency Factor:

  • RecencyFactor= (ProductRecency + SocialRecency) / 2

  • 0 ≤ RecencyFactor ≤ 1.

Then, for each dApp:

  • ProfileScore = StaticScore x (0.8 + 0.2 x RecencyFactor)

  • 0 ≤ ProfileScore ≤ 100.

Interpretation:

  • Static completion remains dominant - 80%;

  • RecencyFactor adjusts the score downward by up to 20%;

  • If no updates → profile score reduced slightly;

  • If active updates → full score retained.
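The Profile Completeness calculation above can be sketched as follows (a minimal illustration, assuming off-chain scoring; function and variable names are hypothetical, and recency values are expressed as fractions so the final multiplier ranges from 0.8 to 1.0):

```python
def recency(days_since_update: int) -> float:
    """Map days since the last valid update to the recency fraction above."""
    if days_since_update <= 14:
        return 1.0
    if days_since_update <= 30:
        return 0.5
    return 0.0

def profile_score(completed: int, total: int,
                  days_product: int, days_social: int) -> float:
    """StaticScore x (0.8 + 0.2 x RecencyFactor), on a 0-100 scale."""
    static_score = 100 * completed / total
    recency_factor = (recency(days_product) + recency(days_social)) / 2
    return static_score * (0.8 + 0.2 * recency_factor)
```

For example, a profile with all 10 of 10 fields completed, a product update 10 days ago, and a social update 20 days ago would score 100 x (0.8 + 0.2 x 0.75) = 95.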

Activity Score (30%)

For each dApp:

  • WAU7d = No. of unique wallets that executed at least one valid action within the last 7 days;

  • NewUsers7d = No. of wallets whose first-ever interaction with the dApp occurred within the last 7 days;

  • Actions7d = Total no. of valid rewardable actions executed within the last 7 days.

To reduce the dominance of extremely large dApps, the natural logarithm is applied:

  • WAUscaled = ln (1+WAU7d);

  • NewUsersscaled = ln (1+NewUsers7d);

  • Actionsscaled = ln (1+Actions7d).

Then, each metric is converted into a percentile rank across all eligible dApps:

  • PWAU = PercentileRank( WAUscaled among all eligible dApps);

  • PNewUsers = PercentileRank( NewUsersscaled among all eligible dApps);

  • PActions = PercentileRank( Actionsscaled among all eligible dApps).

*The Percentile Rank of xi is defined as:

  • P(xi) = 100 x #{ j | xj < xi} / N,

  • 0 ≤ P(xi) ≤ 100, where

    • xi = value of metric x for dApp i;

    • xj = value of metric x for another eligible dApp j;

    • N = total no. of eligible dApps;

    • # { j | xj < xi} = no. of eligible dApps whose metric value is strictly lower than xi.

  • To avoid instability when multiple dApps share identical values, the following tie rule applies:

    • If T dApps share the same value xi, then

      • P(xi) = 100 x [ #{ j | xj < xi} + 0.5 x T] / N
    • *This assigns tied values the midpoint percentile among tied entries.

Then, for each dApp:

  • ActivityScore = 50% x PWAU + 30% x PNewUsers + 20% x PActions

  • 0 ≤ ActivityScore ≤ 100.
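A sketch of the Activity Score pipeline above, including the midpoint tie rule (illustrative Python; names are hypothetical, and the tied count includes the dApp itself, matching the formula):

```python
import math

def percentile_rank(values: list[float], x: float) -> float:
    """Percentile rank with the midpoint tie rule: 100 x (below + 0.5 x T) / N."""
    below = sum(1 for v in values if v < x)
    tied = sum(1 for v in values if v == x)
    return 100 * (below + 0.5 * tied) / len(values)

def activity_score(wau: list[int], new_users: list[int],
                   actions: list[int], i: int) -> float:
    """Weighted activity score for dApp i across all eligible dApps."""
    # ln(1 + x) scaling to dampen extremely large dApps
    wau_s = [math.log1p(v) for v in wau]
    new_s = [math.log1p(v) for v in new_users]
    act_s = [math.log1p(v) for v in actions]
    return (0.5 * percentile_rank(wau_s, wau_s[i])
            + 0.3 * percentile_rank(new_s, new_s[i])
            + 0.2 * percentile_rank(act_s, act_s[i]))
```

Note that under the midpoint rule a unique top value scores 100 x (N - 0.5) / N rather than exactly 100, which keeps ranks stable when values tie.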

Distribution Efficiency Score (30%)

For each dApp:

  • Alloc7d = Total B3TR allocated in last 7 days;

  • Dist7d = Total B3TR distributed in the last 7 days.

  • DistributionEfficiency = Dist7d / Alloc7d

    • If Alloc7d = 0 → DistributionEfficiency= 0.

    • The value is clamped between 0 and 1, ensuring that it does not distort the efficiency metric:

      • DistributionEfficiency = min (Dist7d / Alloc7d, 1)

      • 1.0 → fully distributed allocation;

      • 0.5 → distributed half;

      • 0 → no distribution.

Then, convert DistributionEfficiency into percentile rank:

  • PDistributionEfficiency = PercentileRank(DistributionEfficiency);

  • Range: 0-100.

Then, for each dApp:

  • DistributionScore = PDistributionEfficiency

  • 0 ≤ DistributionScore ≤ 100.
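The clamped efficiency ratio above is small enough to show in full (an illustrative sketch; the percentile-rank step is applied separately across all eligible dApps):

```python
def distribution_efficiency(dist_7d: float, alloc_7d: float) -> float:
    """Share of the 7-day allocation actually distributed, clamped to [0, 1]."""
    if alloc_7d == 0:
        return 0.0  # no allocation received -> efficiency defined as 0
    return min(dist_7d / alloc_7d, 1.0)
```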

Community Score (30%)

For each dApp:

  • AvgRating = arithmetic mean of all valid star ratings (range: 1-5);

  • NumRatings = total no. of valid ratings.

    • To prevent manipulation from very small samples:

      • If NumRatings < Rmin → CommunityScore = 0, where Rmin = 20.
  • *Threshold adjustable by governance.

  • HelpfulVotes = total no. of thumbs up across all reviews;

  • UnhelpfulVotes = total no. of thumbs down across all reviews.

To reduce the influence of low rating counts, a confidence factor is applied.

  • ConfidenceFactor = 1 - exp(-NumRatings / K), where

    • exp = the exponential function (base e);

    • K = smoothing constant (e.g. 100).

    • When NumRatings is small → ConfidenceFactor is small;

    • As NumRatings increases → ConfidenceFactor approaches 1.

    • The factor grows smoothly and asymptotically towards 1.

Define net helpful ratio and engagement factor of Reviews:

  • HelpfulRatio = HelpfulVotes / (HelpfulVotes + UnhelpfulVotes),

  • If no votes exist, then HelpfulRatio = 0.5.

  • EngagementFactor= 0.5 + 0.5 x HelpfulRatio, where

    • 0.5 ≤ EngagementFactor ≤ 1.
  • This ensures:

    • Reviews that are widely marked helpful → small boost;

    • Reviews widely marked unhelpful → slight penalty;

    • No engagement → neutral.

Then,

  • AdjustedRating = AvgRating x ConfidenceFactor x EngagementFactor

    • 0 ≤ AdjustedRating ≤ 5.

The CommunityScore is the percentile rank of AdjustedRating across eligible dApps.

  • PAdjustedRating = PercentileRank(AdjustedRating);

  • CommunityScore = PAdjustedRating

  • 0 ≤ CommunityScore ≤ 100

  • Interpretation:

    • High AvgRating + High NumRatings → High CommunityScore;

    • High AvgRating + Low NumRatings → Moderate CommunityScore;

    • Low AvgRating + High NumRatings → Low CommunityScore.
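The adjusted-rating computation above can be sketched as follows (illustrative Python; names and constants are assumptions, and the confidence factor is written in the saturating form 1 - exp(-NumRatings / K) so that it rises from 0 towards 1 as ratings accumulate):

```python
import math

R_MIN = 20   # minimum ratings before a score is computed (governance-adjustable)
K = 100      # smoothing constant for the confidence factor

def adjusted_rating(avg_rating: float, num_ratings: int,
                    helpful: int, unhelpful: int) -> float:
    """AvgRating x ConfidenceFactor x EngagementFactor, in [0, 5]."""
    if num_ratings < R_MIN:
        return 0.0  # too few ratings to trust
    confidence = 1 - math.exp(-num_ratings / K)        # -> 1 as ratings grow
    total_votes = helpful + unhelpful
    helpful_ratio = helpful / total_votes if total_votes else 0.5
    engagement = 0.5 + 0.5 * helpful_ratio             # 0.5 .. 1.0
    return avg_rating * confidence * engagement
```

The CommunityScore is then the percentile rank of this adjusted rating across all eligible dApps.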

Total Score

TotalScore = 10% x ProfileScore + 30% x ActivityScore + 30% x DistributionScore + 30% x CommunityScore.
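As a final sketch, the weighted combination (assuming each component has already been computed on a 0-100 scale):

```python
def total_score(profile: float, activity: float,
                distribution: float, community: float) -> float:
    """TotalScore = 10% Profile + 30% Activity + 30% Distribution + 30% Community."""
    return (0.10 * profile + 0.30 * activity
            + 0.30 * distribution + 0.30 * community)
```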


I love this idea. This is exactly the kind of upgrade the DAO needs to highlight the best-performing dApps.

I propose including a ‘minimum wallet age’ as an additional eligibility requirement for leaving a community review, including thumbs up or thumbs down votes on comments.

I think it would also be good to be able to see a user’s review history, or to have a type of ‘reviewer credibility score’ based on the user’s activity within the DAO at the time of review, GM NFT status, etc. This allows for more visibility and credibility to reviews from active users, and reduces visibility for potentially false reviews.

I also love the idea of the badges. This helps to clearly identify dApps worthy of recognition. One consideration regarding the security audit badge: while it’s great that audits would be conducted internally, it could raise some concerns about potential bias or favouritism. Having them conducted by an independent third party only, instead of internally, could strengthen the perception of neutrality.

A compromise to this would be to have two different badges, one for VeChain audits, and one for third party audits. It would be a huge benefit for a dApp to have both badges. A VeChain audit badge would show alignment with ecosystem standards, while a third-party audit badge would reinforce neutrality and decentralization. Together, they would provide complementary signals of trust and credibility.

Great proposal, and I’m looking forward to seeing how this one plays out!


Sounds like a brilliant upgrade. The biggest sticking point for VeBetterDAO, from my perspective, is ease of use. Hopefully this goes some way towards enhancing that.


I am adding a summary of what was discussed on Discord / Telegram @GoldenShamrock @GreenAmbassador @litter3 @Morb

User Reviews & GM NFT Gating

  • Rover and BreakingBallz both advocated restricting written reviews to GM NFT holders only, arguing it adds utility to GM NFTs, creates an earned right to give feedback, and helps prevent review spam.
  • Jerome was initially cautious about being too restrictive but warmed to differentiating or weighting reviews from GM holders differently (e.g., separate GM rating vs. general rating).
  • GoldenShamrock suggested a “:check_box_with_check: Verified/Trusted Review” designation tied to GM NFT level, and proposed the review system could eventually assist or replace the dApp Endorsement process.
  • Jerome later acknowledged a valid counterpoint: new users who haven’t yet earned a GM NFT may still want to voice opinions about their user journey.
  • Rover stressed reviews should be scanned for malicious/offensive content.
  • Green Ambassador Challenge added the scoring system should incorporate a freshness/recency weighting — older reviews count less than recent ones. This prevents dApps from being permanently penalized for past performance and rewards genuine improvement over time.

Metrics

  • ZeLoop argued the proposal misses sustainability performance — a core promise of VeBetterDAO. He proposed adding a 20% sustainability assessment weighting (for apps with sustainability actions) and reducing other categories accordingly.
  • Jerome agreed the idea is powerful but expressed concern about defining objective, undeniable metrics for sustainability, inviting further discussion on the Discourse thread.
  • Green Ambassador Challenge suggested that activity level metrics could be enhanced to reflect the quality or value of user actions, not just volume — ensuring that apps driving fewer but higher-impact actions (such as purchasing a solar farm) are recognized on equal footing with apps that generate high transaction counts through simpler tasks.
  • Rover proposed a user-to-voter conversion ratio as an additional metric, measuring how many app users go on to vote for that app — indicating genuine user satisfaction.

UX & Presentation

  • morbidejs recommended replacing the word “users” with “wallets” throughout the proposal to be factually accurate, since one user can operate multiple wallets.
  • Rover strongly advocated for a simple 5-star rating as the default user-facing view, with detailed analytics accessible via an “expand” or “more details” option for advanced users. He warned the current app cards are too number-heavy and could overwhelm newcomers.
  • Jerome confirmed this is the plan: high-level baseline with drill-down detail.
  • Rover also suggested user ratings could enable AI-powered recommender systems (similar to Netflix) to suggest apps.
  • Rover highlighted that some dApps offer no explanation or landing page on desktop, leading to immediate “bounces” — reinforcing the need for profile completeness scoring.
  • Green Ambassador Challenge proposed that auto-pulling updates from external sources like Twitter/X would be valuable (unclear whether this was already included in the proposal).

Sorting & Discoverability

  • Genesis2021 proposed a “Plug & Play” sorting for new users, grouping dApps by ease of onboarding (open-and-use → extra steps → purchase needed → DeFi).
  • GoldenShamrock suggested time-based sorting tabs (Highest Rated Last 24h / Week / Month / All Time) and endorsed the Plug & Play default view for new wallets.
  • Jerome expressed interest in the Plug & Play concept and asked how crypto-knowledge levels (beginner/intermediate/advanced) could be assessed.

Badge System & Security Audits

  • morbidejs raised conflict-of-interest concerns about the VeChain Foundation performing security audits that unlock dApp badges, citing Foundation financial sponsorship of dApps and Foundation employees being dApp founders. He questioned the Foundation’s qualifications and asked whether audits would be applied uniformly.
  • Jerome countered that the Foundation has no incentive to be permissive (reputational risk) and that the community could propose restricting specific auditors if audits are perceived as unfair.
  • Rover noted the badge system needs further detail on how badges are decided and awarded.

For this proposal, I suggest that we implement based on the initial scope and observe how this goes. As a second phase we can collect feedback and address while also implementing additional features. Here is a recap of the features that can be added:

Review System & GM NFT Gating

  • GM NFT-gated written reviews — add a specific badge to reviews from GM NFT holders, and give them a higher weight
  • “Verified/Trusted Review” badge — add a specific badge to reviewers that completed at least one action on the app
  • Recency/freshness weighting — give more weight to recent reviews, preventing apps from being permanently penalised for past performance and rewarding genuine improvement

Metrics & Scoring

  • Sustainability performance weighting — introduce a new criterion to reward apps based on verifiable sustainability actions, reflecting VeBetterDAO’s core mission (methodology to be defined)
  • User-to-voter conversion ratio — track what percentage of an app’s users go on to vote for it, as a proxy for genuine user satisfaction

UX & Presentation

  • Auto-pull updates from external sources — automatically surface recent updates from Twitter/X or similar channels on a dApp’s profile page

Sorting & Discoverability

  • “Plug & Play” sort mode — group dApps by ease of onboarding (open-and-use → extra steps required → purchase needed → DeFi), defaulting to this view for new wallets
  • Time-based sorting tabs — add filters such as Highest Rated: Last 24h / Week / Month / All Time
  • Crypto knowledge level filter — allow users to self-identify as beginner/intermediate/advanced to surface the most relevant apps (methodology for assessment TBD)

Adding to the above:

  • Community Review Eligibility Controls: Introduce a minimum wallet age input to calculate the weight of a review, helping reduce spam and low-quality input.

  • Reviewer Transparency & Credibility System: Enable access to user review history and introduce a reviewer credibility score based on DAO activity, GM NFT status, and engagement at the time of review.

  • Audit Badge System: add auditor identity to the security audit badges to improve transparency and trust.
