  1. Mar 23, 2022
      Add support for author=None and committer=None · 3eff720a
      vlorentz authored
      committer=None occurs in some malformed commits generated by old dgit
      versions, and author=None can occur for the same reason.
      
      For now, this is not supported by swh-model, so tests temporarily
      disable attrs checks that swh-model relies on.
  2. Mar 22, 2022
      pytest: Exclude build directory from test discovery · 92c78ab5
      Antoine Lambert authored
      Because setuptools copies test modules into subdirectories of the
      build directory, pytest fails with ImportPathMismatchError
      exceptions when invoked from the root directory of the module.
      
      So ignore the build folder during test discovery.
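The exclusion described above is commonly expressed through pytest's `norecursedirs` option; a minimal sketch, assuming the project configures pytest in `setup.cfg` (the actual file and values in the repository may differ):

```ini
[tool:pytest]
# keep pytest out of the copies that setuptools leaves under build/
norecursedirs = build
```

`norecursedirs` stops directory recursion during collection, so the copied test modules under `build/` are never imported a second time.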
  7. Mar 02, 2022
      Move metrics handling from backends to RPC server · 284a4ab3
      vlorentz authored
      Motivation: replace the duplicated code in the backends with a single
      implementation, to be consistent with the objstorage (which has many
      more backends).
      
      This also fixes the issue of metrics from 'extid_add' being missing
      when using the postgresql storage.
  8. Feb 24, 2022
      Update for swh.core 2.0.0 · 215162b2
      David Douard authored
      - Add the entry points expected by swh.core 2's new db handling
        features:
      
        - add a ``swh.storage.get_datastore()`` function
        - add a ``swh.storage.postgresql.storage.Storage.get_current_version()`` method
        - move sql migration scripts into ``swh/storage/sql/upgrades``
        - modify sql initialization scripts to match swh.core 2 (remove
          dbversion management code).
      
      - Update tests to use the new template-based database handling; this
        should have only minimal impact on test execution performance.
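The entry points listed above can be pictured with a purely illustrative sketch; the names mirror the commit message, but the bodies and the version number are stand-ins, not the real swh.storage code:

```python
# Hedged sketch of the swh.core 2 db-handling hooks described above.

class Storage:
    """Stand-in for swh.storage.postgresql.storage.Storage."""

    # hypothetical schema version; the real value tracks the sql scripts
    current_version = 1

    def get_current_version(self) -> int:
        # swh.core 2 compares this with the version recorded in the
        # database to decide which scripts in swh/storage/sql/upgrades
        # still need to be applied
        return self.current_version


def get_datastore(cls="postgresql", **kwargs):
    # swh.core 2 resolves the backend through an entry point; here we
    # dispatch on a name purely for illustration
    backends = {"postgresql": Storage}
    return backends[cls](**kwargs)


store = get_datastore("postgresql")
```

The point of the hook is that the migration tooling asks the datastore which schema version its code expects, instead of the old in-database dbversion management.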
      Add types-toml to requirements-test.txt · 386fb4d6
      David Douard authored
  22. Dec 13, 2021
      postgresql: Fix off-by-one error in db_to_date on negative dates · fb1b3a06
      vlorentz authored
      Using `int()` on `date.timestamp()` rounded it toward zero (i.e. up
      for negative values), but the semantics of `model.Timestamp` are that
      the actual time is `ts.seconds + ts.microseconds/1000000`, so all
      negative dates were shifted one second up.
      
      In particular, this caused dates from
      `1969-12-31T23:59:59.000001` to `1969-12-31T23:59:59.999999`
      (inclusive) to collide with dates from
      `1970-01-01T00:00:00.000001` to `1970-01-01T00:00:00.999999`,
      which is how I discovered the issue.
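The rounding problem above can be reproduced with a few lines of stdlib Python; this illustrates the truncation-vs-floor distinction, not the actual `db_to_date` code:

```python
import math
from datetime import datetime, timezone

# half a second before the epoch
dt = datetime(1969, 12, 31, 23, 59, 59, 500000, tzinfo=timezone.utc)
ts = dt.timestamp()  # -0.5

# Buggy conversion: int() truncates toward zero, so negative timestamps
# gain a second once the microseconds are added back in.
buggy_seconds = int(ts)         # 0, not -1
# Fixed conversion: floor() always rounds down.
fixed_seconds = math.floor(ts)  # -1

microseconds = dt.microsecond   # 500000

# model.Timestamp semantics: actual time = seconds + microseconds / 1e6
buggy_time = buggy_seconds + microseconds / 1_000_000  # lands after the epoch
fixed_time = fixed_seconds + microseconds / 1_000_000  # the original instant
```

`buggy_time` equals `ts + 1.0`, which is exactly the one-second upward shift described in the commit message.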
  26. Nov 09, 2021
      Add support for redis-based reporting of invalid mirrored objects · 850a7553
      David Douard authored
      The idea is that we check the BaseModel validity at journal
      deserialization time so that we still have access to the raw object from
      kafka for complete reporting (object id plus raw message from kafka).
      
      This uses a new ModelObjectDeserializer class that is responsible for
      deserializing the kafka message (still using kafka_to_value) and then
      immediately creating the BaseModel object from that dict. Its `convert`
      method is then passed as the `value_deserializer` argument of the
      `JournalClient`.
      
      Then, for each deserialized object from kafka, if it's a HashableObject,
      check its validity by comparing the computed hash with its id.
      
      If it is invalid, report the error in the logs and, if configured,
      register the invalid object via the `reporter` callback.
      
      In the cli code, `Redis.set()` is used as such a callback (if
      configured): it simply stores invalid objects using the object id as
      key (typically its swhid) and the raw kafka message value as value.
      
      Related to T3693.
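The flow described above can be sketched with a toy model; `ModelObjectDeserializer`, `convert`, and `reporter` are names taken from the commit message, while the hashable object and the hash check here are simplified stand-ins for swh.model:

```python
import hashlib

class ToyHashableObject:
    """Stand-in for a swh.model HashableObject."""
    def __init__(self, id, data):
        self.id = id
        self.data = data

    def compute_hash(self):
        return hashlib.sha1(self.data).hexdigest()


class ModelObjectDeserializer:
    def __init__(self, reporter=None):
        # `reporter` mimics the Redis.set(key, value) callback from the cli
        self.reporter = reporter

    def convert(self, object_type, raw_message):
        # real code: d = kafka_to_value(raw_message); obj = cls.from_dict(d)
        obj = ToyHashableObject(**raw_message)
        if obj.compute_hash() != obj.id:
            # invalid mirrored object: report it while the raw kafka
            # message is still at hand for complete reporting
            if self.reporter is not None:
                self.reporter(obj.id, repr(raw_message).encode())
            return None
        return obj


reports = {}
deser = ModelObjectDeserializer(
    reporter=lambda key, value: reports.update({key: value})
)

good = {"id": hashlib.sha1(b"ok").hexdigest(), "data": b"ok"}
bad = {"id": "0" * 40, "data": b"tampered"}

ok_obj = deser.convert("toy", good)   # valid: hash matches id
bad_obj = deser.convert("toy", bad)   # invalid: reported, returns None
```

Doing the validity check inside the deserializer is the key design choice: once the value has been handed to the journal client, the raw kafka message needed for reporting is no longer available.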