AI hardware accelerators, including FPGAs and emerging ASICs such as Google TPU, AMD AIE, and AWS Trainium. This note does not cover the software stack; it focuses only on the hardware architecture and its design choices.
This is a short note on networking functions on FPGA, as I am implementing some of them myself, such as fast packet classification, flow tables, and network intrusion detection.
Topics on vision and language models, including classic DNN-based image classification, visual question answering, visual object tracking, and transformer-based LLMs, along with their potential applications such as AI agents and robots.
This is not my first post, but it has been quite a while since I last posted anything publicly. The moonlight today is nice, after a long drive back and forth from Waterloo.
Today is not special, but I happened to run across an amazing blog by Josh Johnson, and felt like I should start writing again. I am not writing to inspire anyone; my only wish is that when I look back, maybe before I die, I will still have a little anchor to the past.
This is an idea I have been thinking about for a while. The idea is basically to let any user write their custom needs in natural language, and then use an LLM to synthesize the GUI software. An LLM by itself would not be able to generate realistic cross-platform GUI software, as the existing software development tools are too complicated for an LLM to handle in a few shots. However, an LLM can be used to generate a DSL, which can then be compiled into the GUI software.
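To make the pipeline concrete, here is a minimal sketch of what the DSL half could look like: the LLM emits a small declarative spec instead of full GUI code, and a deterministic interpreter expands that spec into a widget tree that a renderer could later consume. Everything here (the `Widget` type, the `parse_spec` function, the spec schema) is a hypothetical illustration, not a fixed design.

```python
# Hypothetical sketch: an LLM emits a JSON-based DSL describing the UI,
# and this interpreter turns it into a widget tree. A real backend would
# then map the tree onto a cross-platform toolkit.
import json
from dataclasses import dataclass, field

@dataclass
class Widget:
    kind: str                       # e.g. "window", "label", "button"
    props: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def parse_spec(spec: dict) -> Widget:
    """Recursively turn a JSON spec (the DSL) into a widget tree."""
    return Widget(
        kind=spec["kind"],
        props={k: v for k, v in spec.items() if k not in ("kind", "children")},
        children=[parse_spec(c) for c in spec.get("children", [])],
    )

# Example spec an LLM might emit from the prompt
# "a window with a greeting and a quit button":
spec = json.loads("""
{
  "kind": "window", "title": "Demo",
  "children": [
    {"kind": "label", "text": "Hello!"},
    {"kind": "button", "text": "Quit", "on_click": "app.quit"}
  ]
}
""")
root = parse_spec(spec)
print(root.kind, [c.kind for c in root.children])  # window ['label', 'button']
```

The point of the intermediate DSL is that the LLM only has to produce a small, constrained vocabulary of widgets and properties, which is far easier to validate and regenerate than arbitrary toolkit code.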