Ahh, I finally got it. Now I know what iSerialNumber is used for.
About that 128-bit sequence, how about encoding it in base64? Then it'll only be 22 chars (or 44 bytes once mapped to UTF-16LE code points for the string descriptor). Could be fun to map them into something Wingdings-like... or is that too juvenile?
If that is too long, just use some part of the sequence to get a shorter serial number. Given a reasonably small quantity of ATMEL chips, we should be able to find out which part of it has the most entropy. That should be unique enough for the maximum number of units allowed by the current source of VID/PID pairs; at the very least it would ensure no customer ends up with two units sharing the same USB serial number.
[quote author="rsdio"] I do not understand what you are saying, if VID/PID/VER identifies hardware and firmware, then at least one of those must change during an upgrade.[/quote]
Yes, I was wrong about that; see my other post in OLS.
Do you have a copy of the USB DFU Specification, Revision 1.1? If so, look at Appendix B, the DFU File Suffix. There is no serial number there at all.
Yes, I've read it before, and going over it again after your post, it seems I had misunderstood it. I was wrong: the bcdDevice field is for the firmware version number. But then I have to go back and figure out what iSerialNumber is for...
Actually, DFU uses this value to check whether the supplied firmware runs on the current hardware. That doesn't mean it couldn't be used in such a way, but it is better to use the iSerialNumber string descriptor; that is what it is intended for.
Haven't read that exact document, but I have used their DFU functions on the AT89C5131 before. It is that functionality I want. I think it is nice to adhere to a standard, giving everybody greater choice of host loader program and of OS to run it on. If we manage to make a compliant DFU bootloader, we should be able to use pretty much any DFU host program to flash new firmware. The part about reusing USB code is just to avoid wasting any of the scarce resources on the smaller PICs.
Also, the DFU protocol itself checks whether the supplied firmware is valid for the current hardware, through VID/PID and bcdDevice. That makes the discussion in another thread about using that last word for a firmware version number a bit counterproductive. The firmware validity check will perhaps lessen the problem of people flashing the "wrong" firmware, but it won't stop the hard-core users, who are probably already in possession of some other form of PIC programmer.
DFU also supports flashing other memories, so it should be possible to add DFU support for the on-chip EEPROM or the bitstream of an attached FPGA.
Hmm, I'm getting more and more involved in this, even though my USB stack needs a lot more work :-)
The OTG part of either the 24F... or 32... parts sounds really fun. Then we could have something like a Mass Storage class serving the SD card when connected as a device to another host (giving easy access to hosted pages for the less involved user), or it could serve pages from an attached USB memory stick when used as a host.
Maybe I'm dreaming in technicolor. I should go back to finalising the device-only part of my stack first.
The only way for software negotiation to work is if the remote board has rechargeable battery backup; otherwise it would not have the power to negotiate in the first place.
You're right of course; I wasn't following through on my own train of thought.
Even if there are several ad-hoc solutions to PoE, why wouldn't we adhere to a standard, IEEE 802.3af? The question of which pair of pairs carries the supply could be solved with two Graetz (full-bridge) rectifiers (eight diodes, or 16 to be absolutely sure even if someone is using a single pair to carry the power; we're not aiming for Gigabit Ethernet, which uses all four pairs). The real problem I see is that we could not use the connector with internal magnetics as in v1, and we'd need a wide-input-range power regulator (48 V wall warts are hard to come by, so the input should tolerate something like 9-60 V). For PoE to work we need access to the centre tap of the primary side of the Ethernet transformers, and we don't have that with the previous connector. But since that connector doesn't seem to be available for v2 anyway (it's gone from Seeed's depot), I would call this a nice-to-have, not a need. Imagine only the one cord going into your black box of TCP/IP awesomeness.
Hmm, that's probably not it. In PP_MODE 0, both EVEN and ODD point to the same buffer descriptor, so the result is that I set the value twice. That shouldn't cause any trouble; the duplicated code should be removed by the compiler's optimization pass. To be verified...
Also, just commenting out those lines with // wouldn't work. You have to use /* */ comments, as those lines are part of an x-macro. And that last backslash isn't supposed to be there; that would include the x-macro within itself and create an infinite preprocessor loop :-)
But I think the problem lies in the Data Toggle Synchronization (DTS) and its enable bit (DTSEN). Or it is just in the adaptation to the USART API in the cdc.c class file; that part was thrown together very quickly once the CDC class itself finally worked. Admittedly, the working state of CDC is questionable as long as I haven't figured out the DTS.
There is a lot of debug printf output on stderr (the real hardware USART) that should show every decision taken in the state-machine parsing of each packet. In my basement I capture it with my BP V2Go, but any 5 V serial input device should suffice.
Power over Ethernet would be nice. I'm not sure how it works, but there is something like 48 V on one of the pairs in the RJ45. Hopefully there isn't any software negotiation. Then we could change the power regulator to accept that kind of input voltage, and add a diode so we don't send any current out onto the network if the board has another power source.
Perhaps if I joined in on the next dev batch of BPv4 boards (is it green or yellow label?), I too could help massage my own code onto the 24F. Is there a next batch, and how do I go about obtaining one? Though I do not look forward to soldering all those little legs.
At the very least, I will try to factor out the real low level stuff. That would probably make it easier to port.
This project implements USB CDC ACM, a.k.a. USB-serial, on a PIC18F4550. As such it should run pretty much unaltered on the USB IR Toy. I haven't got one of them, so I can't know for sure. The include search paths are probably not correct in the .mcp/.mcw, and the processor setting and linker script are certainly not correct. All #pragmas in main.c should also be considered suspect if compiling for any board other than mine. (Mine is a home-built CREATE USB Interface, http://www.create.ucsb.edu/~dano/CUI/.)
The main program runs an echo server: all chars received are transmitted back, except for the very first one, for some reason I can't figure out at this time of night. If compiled with the DEBUG setting (__DEBUG macro defined), it will output some extensive info on the USART.
The cdc.[c|h] files implement an API that mimics the USART library that comes with MCC18. But as the USB stack isn't interrupt-driven yet and getcCDC is a blocking call, you can expect to halt your firmware by using it unwisely.
You should also supply your own VID/PID pair. Remember to change it in the .inf file as well. And on that subject, the .inf file (and the .cat) is the result of blind trial and error, so don't expect it to be correct in any way.
This is my first working attempt at a USB stack. My intention is to develop a DFU-class-based bootloader and reuse the bootloader's USB code as a library for the application. The API as it stands doesn't support that, as I am very fond of C macros and try to do too much at compile time. So don't expect any future code from me to be backwards compatible; rather, expect to see something totally different.
Also, the code isn't very optimized; that will come after I have settled on an API. But if anyone takes a look at it: how would you prefer interacting with descriptors and messages, indexed byte arrays or structs? Are there perhaps compiler-optimization opportunities or portability issues here? With the current implementation, I wouldn't think the code is portable to a 16-bit platform without some work.
As for optimization, should I strive for code size or speed? How costly is a call/ret or goto/goto compared to inlining 2-3 lines of C? And how does that differ in interrupt context?
Is that really necessary? Sure, if we would like to fulfil the requirements for USB OTG certification; but if we just want to use the OTG function, we must supply power to both the Bus Pirate and possibly the device connected to it (no supply required if the other device is another OTG device).
My idea was to use an OTG cable with a "power injector", and that injected power would supply both the Bus Pirate and other devices. Both devices would receive power through their respective USB connectors.
But this solution may not be satisfactory; we should check the specification on which device should supply what voltage and power under which circumstances.
But to still be able to make the Bus Pirate boards, there could be two through-holes in the path of VUSB. If some special supply circuitry is needed, the connection between those holes could be severed and the circuitry installed in its place, perhaps as a daughter-board add-on. Or put a single pin in one of them and supply everything with an external 5 V, à la my idea with the injector cable.
I just had a thought. The USB connector on the board actually has a mini-A/B footprint. Couldn't we then route the ID pin of the connector to the unused RP16/USBID pin of the PIC? All the other facilities, i.e. the Vbus sensor/detector, are already in place, except for supply. The supply could be taken care of by a "hacked" USB OTG cable. Then there would be a future possibility of USB OTG. That could be used for rrreeeaaallllllyyy long BASIC scripts via an external hard drive :-) Or connect a hub, a keyboard and the USB LCD backpack developed in another thread, and we have a handheld field-testing device. Firmware wanting, of course.
But I would think it'll make the v4 hardware even more future-proof.
Yes, I've seen it, but that thread was running a bit stale, and I had just had some success when I read this thread and couldn't resist replying with some comfort for the minds distraught over the Microchip libraries.
But as soon as I have a working CDC-ACM class implementation I'll post it up there to get some feedback on my decisions regarding API abstractions, which aren't working for my plan to use the stack as a preloaded library shared by both the bootloader and the app (via linker script). Right now too much is hard-coded and implemented at compile time. (I have a serious C preprocessor macro love.)
[quote author="tayken"] Actually there are some open source projects being developed because of this situation. [/quote]
Here's a little teaser: I just got my own USB stack running on a PIC18F4550. Only enumeration right now; I need to implement a class driver as well. I have started looking at CDC-ACM (USB-serial). And the API is in fast flux, so I'm not ready to share just yet.
But I'll probably share a first version soon. The intention is CC0, as I'm doing it just to challenge myself. The goal is to implement a bootloader based on the DFU class.