Token Streaming to TTS
I learnt from Luiz’s demo that we can pass a generator (the streaming reply from an LLM) straight to the Google TTS gRPC API, and the TTS API handles the sentence tokenization itself. Latency dropped noticeably. Very neat development in TTS.
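To make the idea concrete, here is a minimal local sketch of the sentence buffering that the TTS service reportedly does server-side when you hand it a token generator. The function name and the token stream are my own illustration, not the actual Google client API:

```python
import re
from typing import Iterable, Iterator

def sentences_from_tokens(tokens: Iterable[str]) -> Iterator[str]:
    """Buffer streamed LLM tokens and yield complete sentences.

    Mimics (locally, for illustration) the sentence tokenization the
    TTS service performs when you pass it a streaming generator.
    """
    buf = ""
    for tok in tokens:
        buf += tok
        # Emit everything up to sentence-ending punctuation + whitespace.
        while True:
            m = re.search(r"[.!?](\s+)", buf)
            if not m:
                break
            yield buf[: m.start() + 1].strip()
            buf = buf[m.end():]
    # Flush any trailing partial sentence at end of stream.
    if buf.strip():
        yield buf.strip()

# Example: a chunked token stream such as an LLM might emit.
tokens = ["Hel", "lo there. ", "How are", " you today? ", "Fine"]
print(list(sentences_from_tokens(tokens)))
# → ['Hello there.', 'How are you today?', 'Fine']
```

The upshot is that each sentence can be dispatched to synthesis as soon as it completes, instead of waiting for the full LLM reply.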
Automation
The time required to deploy ideas as code keeps shrinking, especially now that I’m trying Cursor. So:
- We will be valued as the minds who look at problems and fathom solutions.
- We should time-box ourselves on known tasks more tightly.
- We should spend more effort to understand concepts and designs.
- We should try ideas robustly and fast, and maybe automate the aftermath documentation.
- Even if SDK adaptation is cheap via generation, we should keep good tracking of tooling.
For today I hope to:
- Build more and argue less in meetings. The best debate skills only forge your own shackles.
- Wake up earlier with the kid.
- Think less about EE policy and how to improve it; practice a better dichotomy of control.
- Not bother my wife with political topics like Project 2025; I should have discussed the long weekend plan instead.