Here is a copy of my working kanban for Nectar and Nectar-Engine. I'm old-school (vim and a Markdown file, nothing fancy), but it should give you an idea of what's on my mind and what's done or in progress.
Nectar
To Do
- Branding and account changes in code
- Remove legacy Python 2 code
- Remove most unsupported Steem code, as the APIs have diverged too much
- Plan for potential backward-incompatible changes and how to manage them
- Add a compatibility layer or warnings for deprecated functions (see the sketch after this list)
- Make Hive default in all methods
- Audit and update API endpoints for all Hive interfaces
- Create migration guide for users switching from beem to Nectar
- Update documentation with new branding and changes
- Get close to 100% coverage in tests
- Survey current users to understand their needs and pain points
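
For the compatibility-layer item above, one common Python pattern is a small decorator that emits a `DeprecationWarning` while delegating to the new call. This is only a sketch of the idea; the function names below are hypothetical stand-ins, not actual nectar functions.

```python
# A minimal sketch of a deprecation shim; the names here are
# hypothetical examples, not nectar's real API.
import functools
import warnings

def deprecated(replacement):
    """Mark a function as deprecated, pointing callers at its successor."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__}() is deprecated; use {replacement} instead",
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

def get_discussion(tag):  # stand-in for the new implementation
    return f"bridge result for {tag}"

@deprecated("get_discussion()")
def get_discussions_by_blog(tag):  # hypothetical legacy name
    return get_discussion(tag)     # delegate to the new call

print(get_discussions_by_blog("hive-139531"))  # warns, then delegates
```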
In Progress
- Evaluate third-party dependencies for security updates or replacements
- Converting `tox.ini` -> `pyproject.toml` for tests and linting
- Deprecated `datetime.utcnow()` -> `datetime.now(timezone.utc)` (see the sketch after this list)
- Fixing stray `datetime` issues as they come up / removing the pytz dependency
- Find and update all the calls that make use of the `tags` API
- Finish setting up the cron job to run nightly for the nectarflower node updates
- Finish working on the benchmarks to update the nectarflower metadata (hourly?)
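
On the `datetime.utcnow()` item above: Python 3.12 deprecates `datetime.utcnow()` because it returns a naive datetime, so the migration swaps in timezone-aware calls. A minimal before/after:

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12: returns a naive datetime in UTC
# stamp = datetime.utcnow()

# Preferred: returns a timezone-aware datetime in UTC
stamp = datetime.now(timezone.utc)

# If older code expects a naive UTC datetime (e.g. for formatting
# against the chain's "%Y-%m-%dT%H:%M:%S" timestamps), strip the
# tzinfo explicitly:
naive_stamp = datetime.now(timezone.utc).replace(tzinfo=None)
```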
Done
- `Makefile` to simplify most tasks
- Replace legacy `setup.py` with `pyproject.toml`
- Updated dependency management to use `uv`
- Updated linting and formatting to use `ruff` and/or `black`/`isort`
- `Comment()` uses the bridge API instead of tags
- `get_discussion_by_*()` methods now pull from the bridge API
- `update_node()` uses the account metadata from nectarflower (see the sketch after this list)
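
To make those last few items concrete, here is a minimal usage sketch. It assumes nectar keeps beem's module layout (it is a beem fork) and that `update_node()` hangs off the blockchain instance; the account/permlink are placeholders, so treat every call here as an assumption rather than the final API.

```python
# A minimal sketch, not the final API: assumes nectar mirrors beem's
# layout and that update_node() lives on the Hive instance.
from nectar import Hive
from nectar.comment import Comment

hv = Hive()       # assumption: default constructor picks public Hive nodes
hv.update_node()  # assumption: refreshes the node list from nectarflower's
                  # account json_metadata, per the item above

# Comment() now resolves content through the bridge API instead of the
# old tags API, so reads follow the same path as Hive frontends.
post = Comment("@thecrazygm/example-permlink", blockchain_instance=hv)  # placeholder permlink
print(post["title"], post["author"])
```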
Nectar Engine
To Do
- Branding and account changes in code
- Add missing methods for things like Liquidity Pools
In Progress
- Convert `tox.ini` -> `pyproject.toml` for tests and linting
- Deprecated `datetime.utcnow()` -> `datetime.now(timezone.utc)` (same fix as sketched for Nectar above)
- Fixing stray `datetime` issues as they come up / removing the pytz dependency
Done
- Replace legacy `setup.py` with `pyproject.toml`
- Updated dependency management to use `uv`
- Updated linting and formatting to use `ruff` and/or `black`/`isort`
- `find_token()` now picks up tokens after the 1000th token (see the pagination sketch after this list)
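
On the `find_token()` fix: the Hive-Engine contracts API caps a single `find` query at 1,000 rows, so reaching anything past the 1,000th token requires offset paging. Below is a sketch of that paging loop against the public endpoint; nectarengine's internal implementation may differ.

```python
# A minimal sketch of paging past the 1,000-row limit on the
# Hive-Engine contracts API; the query shape follows the public
# endpoint, but find_token() itself may be built differently.
import requests

URL = "https://api.hive-engine.com/rpc/contracts"  # public endpoint

def all_tokens():
    """Yield every token row, 1,000 at a time, using offset paging."""
    offset = 0
    while True:
        payload = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "find",
            "params": {
                "contract": "tokens",
                "table": "tokens",
                "query": {},
                "limit": 1000,   # server-side maximum per call
                "offset": offset,
            },
        }
        rows = requests.post(URL, json=payload, timeout=30).json()["result"]
        if not rows:
            return
        yield from rows
        offset += len(rows)

# e.g. count every token on the sidechain, including those past #1000
print(sum(1 for _ in all_tokens()), "tokens found")
```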
As always,
Michael Garcia a.k.a. TheCrazyGM