Zhongzheng District, Taipei City, commission from 段〇〇光: software development requirements
R&D work
Web freelance
Programmer work
What type of project do you need developed?
Website
Back-end management system
Desktop application
Computer game
Other:
We are building an offline-first AI Digital Doctor Platform — a real-time interactive avatar system that runs entirely on local hardware.
This is not a chatbot project.
This is not a website.
This is a product-level AI system designed to operate without cloud dependency, with future SaaS scalability.
Vision
Create a single-node, fully offline AI system capable of:
• Real-time doctor-avatar interaction
• Low-latency speech-to-speech response
• WebRTC-based video interface (Meet-like UX)
• Modular LLM / STT / TTS architecture
• Local knowledge ingestion & daily sync
• Deployable on high-performance laptops or edge machines
Phase 1 is a commercial MVP.
Phase 2 expands into multi-clinic deployment and enterprise orchestration.
⸻
Technical Scope
We are looking for engineers comfortable with:
• Local LLM inference (7B–13B class models)
• CUDA optimization & VRAM management
• Quantization strategies
• Streaming STT + TTS pipelines
• WebRTC video + data channel integration
• Audio-driven avatar rendering
• Docker-based modular architecture
• Offline RAG and versioned data ingestion
This system must:
• Run fully offline
• Maintain ~1–2s response latency
• Be modular and replaceable at each layer
• Deliver full source code and deployment documentation
⸻
Who We Want
Engineers who:
• Have deployed local inference systems before
• Understand performance bottlenecks
• Can design scalable architecture from day one
• Think in systems, not scripts
This is an opportunity to build a foundational AI platform — not a contract task.
All IP and derivative rights belong to 弛雅有限公司.
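As an illustrative aside for applicants, the "modular and replaceable at each layer" requirement could be sketched roughly as follows. This is a minimal sketch under stated assumptions: the class and method names (STTEngine, LLMEngine, TTSEngine, Pipeline) are placeholders, not part of the brief, and real implementations would wrap a local streaming STT model, a quantized local LLM, and a neural TTS engine.

```python
from abc import ABC, abstractmethod


class STTEngine(ABC):
    """Speech-to-text layer; swap in a local streaming STT model."""
    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...


class LLMEngine(ABC):
    """Language-model layer; swap in a local quantized 7B-13B runner."""
    @abstractmethod
    def reply(self, prompt: str) -> str: ...


class TTSEngine(ABC):
    """Text-to-speech layer; swap in a local neural TTS engine."""
    @abstractmethod
    def synthesize(self, text: str) -> bytes: ...


class Pipeline:
    """Wires the three layers together; each one is independently
    replaceable, which is what keeps the stack fully offline-capable."""
    def __init__(self, stt: STTEngine, llm: LLMEngine, tts: TTSEngine):
        self.stt, self.llm, self.tts = stt, llm, tts

    def respond(self, audio_in: bytes) -> bytes:
        text = self.stt.transcribe(audio_in)
        answer = self.llm.reply(text)
        return self.tts.synthesize(answer)


# Stub implementations so the wiring can be exercised without any model.
class EchoSTT(STTEngine):
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")


class EchoLLM(LLMEngine):
    def reply(self, prompt: str) -> str:
        return f"You said: {prompt}"


class EchoTTS(TTSEngine):
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")


pipeline = Pipeline(EchoSTT(), EchoLLM(), EchoTTS())
print(pipeline.respond(b"hello"))  # b'You said: hello'
```

The point of the abstract interfaces is that the ~1–2s latency budget can then be measured and tuned per layer, and any single layer can be swapped (different STT model, different quantization) without touching the rest.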
What operating system do you prefer?
Other
What programming language do you prefer?
Other
Do you need any other services?
Other
What is the current status of your project?
Other
[Optional] Briefly describe the software, its purpose, your desired goals, or any other design details
See the project description above.
What is the delivery type?
Rush job
What is your approximate budget?
NT$500,000 to NT$1,000,000
How would you like to work with the expert? (multiple selections allowed)
Have the expert come to a designated location to discuss
By phone or online
Is there anything else we should be aware of?
No
Where do you need the service?
Taipei City, Zhongzheng District