
indexing-lister: Allow to define flush packet size

Prior to this commit, indexing lister instances flushed to the database every 20 pages of results. This packet size can now be defined per subclass.

For the Bitbucket lister, the number of repositories per page grew from 10 to 100, which enlarged the time frame between flushes.
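
In effect, the change turns a hard-coded flush interval into a class attribute. A minimal sketch of the idea, assuming the `IndexingLister` and `flush_packet_db` names visible in the diff discussed below; the `BitBucketLister` override value of 2 is implied by the review thread, not shown in this excerpt:

    class IndexingLister:
        # flush accumulated results to the db every `flush_packet_db` pages
        flush_packet_db = 20


    class BitBucketLister(IndexingLister):
        # with 100 repositories per page, flushing every 2 pages keeps the
        # original cadence of ~200 repositories per flush (2 * 100 = 20 * 10)
        flush_packet_db = 2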

Depends on !378 (closed)

Test Plan

tox


Migrated from D1635 (view on Phabricator)

Merge request reports

Closed by Phabricator Migration user (Jun 26, 2019 9:19am UTC)

Merge details

  • The changes were not merged into generated-differential-D1635-target.

Activity

  • vlorentz @vlorentz started a thread on the diff

          if per_page != DEFAULT_BITBUCKET_PAGE:
              self.PATH_TEMPLATE = '%s&pagelen=%s' % (
                  self.PATH_TEMPLATE, per_page)
        + # to stay consistent with prior behavior (20 * 10 repositories then)
  • I'm confused by this comment. Prior behavior of what? (I can deduce IndexingLister because it's in the same diff, but it won't make sense afterward.) And why does the Bitbucket lister need to override this behavior?

  • Because I changed the packet size returned by the API from 10 repositories (too small) to 100 repositories (a tad better) for the Bitbucket listing.

    So after 2 iterations of 100 repositories, I already have the 200 repositories to flush to the db. If I had kept the original indexing lister default, the behavior would have changed to flushing every 2000 repositories.
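
A quick sanity check of the cadences discussed in this thread (the helper function is hypothetical; the numbers come from the comments above):

    def repos_per_flush(flush_packet_db, per_page):
        # repositories accumulated between two db flushes
        return flush_packet_db * per_page

    print(repos_per_flush(20, 10))   # 200: original behavior (20 pages of 10)
    print(repos_per_flush(20, 100))  # 2000: unchanged default with the new page size
    print(repos_per_flush(2, 100))   # 200: Bitbucket override keeps the old cadence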

  • Merge request was returned for changes

  • vlorentz @vlorentz started a thread on the diff

          class IndexingLister(ListerBase):
        +     flush_packet_db = 20
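
For context, a minimal self-contained sketch of how such a class attribute could gate flushes in the listing loop; `save_page`, `db_flush`, and the `run` driver are hypothetical stand-ins, and only `flush_packet_db` comes from the diff above:

    class IndexingLister:
        flush_packet_db = 20  # default, as in the diff above

        def save_page(self, page):
            pass  # stand-in: record one page of repositories in memory

        def db_flush(self):
            print('flushing to db')  # stand-in: write accumulated rows

        def run(self, pages):
            for i, page in enumerate(pages, start=1):
                self.save_page(page)
                # flush every `flush_packet_db` pages instead of a hard-coded 20
                if i % self.flush_packet_db == 0:
                    self.db_flush()
            self.db_flush()  # flush whatever remains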
  • Merge request was accepted
