Next time I am down in SEG wandering about looking for stuff to hack, I will look out for what probe-sets are available "off the shelf" and also I will get some pictures of the raw components, to give you guys what is available.
As for assembly, there are a few tricks for small-volume assembly on such parts that can get the assembly costs very low.
I just wish I was more motivated by money ;-), because there are some real opportunities.
PS: just out of curiosity, why are you running the client as superuser?
I just like to live dangerously and tend to get bored easily... Actually, it's because run.sh does not have the correct privileges to execute and I'm too bloody lazy to change them.
As regards an 'installer' for OS X, you do not need one. Just correctly reference the libraries from within Java's class loader, then dynamically load based on the application base folder; it is completely cross-platform. I use a "props" file that contains the file names of the libraries I want loaded. On main class startup there is zero reference to external libraries, so no library errors; then just "reflect" the libraries in. That allows for graceful, controllable failure if a library is missing.
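A minimal sketch of that reflective-loading idea. Everything here is illustrative (the class names and the props-file notion are stand-ins); real code would resolve library names from a props file in the application base folder, not a hard-coded list:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustrative sketch: load optional classes by name at startup and fail
// gracefully when one is missing, instead of dying with a library error.
public class PluginLoader {

    /** Try to load a class by name; return empty instead of crashing if it is missing. */
    public static Optional<Class<?>> tryLoad(String className) {
        try {
            return Optional.of(Class.forName(className));
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return Optional.empty(); // graceful, controllable failure
        }
    }

    public static void main(String[] args) {
        // Stand-in for the names that would come from the "props" file.
        List<String> plugins = Arrays.asList("java.util.ArrayList", "com.example.MissingPlugin");
        for (String name : plugins) {
            System.out.println(name + " -> " + (tryLoad(name).isPresent() ? "loaded" : "skipped"));
        }
    }
}
```

Because nothing is referenced statically, a missing library is just a skipped entry rather than a startup failure.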
Why combine bit banging and block code? Why not just have one extra parameter that specifies the address, so the BP subroutine can write the address first and then read the entire block?
I'm proposing separating the code, but again I am looking at it from the script side. Also be aware that not all addresses are sent the same way: some chips have 8-bit addresses, others 16-bit, and I have seen some crap in China that is reverse endian!!
That is why I suggested splitting the address section out, so that the script could "bit-bang" the address setup, then "high-speed" the block transfer.
Quote
yeuK!!! site just will not let me do attachments
Just embed a URL to the OEM datasheet instead of trying to put the whole file in the forum.
protected URL. The original manufacturer has been bought out; however, a secondary link!!
@hardcore: With your background and experience, could you mail Seeed and explain to them the issues with the probes? You may persuade them to look into finding and offering a better/higher-quality probe cable kit ...
Yep, I am fully aware of the situation with DP and Seeed; it is about 1 hour from where I currently live.
I suspect they are well aware of the issues. In 'SEG' (the area in Shenzhen where electronic components/suppliers can be found en masse) there is a full range of components; you get the price you want by choosing the 'quality' you want.
Which is why I wrote the upgrade procedure and then posted it on my website, because I have been down that road (well, I was in a bar when I did the upgrade, which is like being up at 2am) :-)
Yes, I understand the address pointer needs to be set first, but that could be accomplished by the user code in bit-banging mode, just before passing to the BP routine to grab a block.
It would give more flexibility in device setup, since you would not need to know the size/format of the address or try to make your BP-resident code multi-purpose.
I.e. a 1/2/3/4-byte address setup would be handled in bit-bang mode via the script, which would then call a function in the BP to do the block transfer. Think of it more as customizable "building blocks" that can be called in the BP for speed.
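To illustrate the script-side half of that split, here is a small hypothetical helper (not part of the OLS or BP code) that builds the device-specific address bytes, covering the 8-bit, 16-bit, and reverse-endian cases mentioned above, before handing off to whatever block-transfer call the BP would expose:

```java
// Hypothetical script-side helper: build the address bytes for a given part
// so the BP-resident code never needs to know the address size or endianness.
public class AddressSetup {

    /**
     * Build the address bytes for a device.
     * @param address       the target address
     * @param widthBytes    1, 2, 3 or 4 address bytes, depending on the chip
     * @param reverseEndian true for the odd parts that clock the low byte first
     */
    public static byte[] addressBytes(int address, int widthBytes, boolean reverseEndian) {
        byte[] out = new byte[widthBytes];
        for (int i = 0; i < widthBytes; i++) {
            int shift = 8 * (widthBytes - 1 - i);      // big-endian (high byte first) by default
            out[i] = (byte) ((address >> shift) & 0xFF);
        }
        if (reverseEndian) {                           // swap for reverse-endian parts
            for (int i = 0; i < widthBytes / 2; i++) {
                byte t = out[i];
                out[i] = out[widthBytes - 1 - i];
                out[widthBytes - 1 - i] = t;
            }
        }
        return out;
    }
}
```

The script would bit-bang these bytes out, then call the (hypothetical) BP block-transfer building block.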
Quote
I searched around but couldn't find this. The part is on Octopart, but the links are just junk distributors without datasheets.
Sorry, relevant pages for the CAT24WC32/64 (32K/64K-bit I2C serial CMOS EEPROM) attached as JPGs!!
Quote
Which part gave you the most problems? I'll work on documenting it better.
The main issue was which commands send a 'reply' (perhaps a table is the way to go).
I.e.:

    out.write(C_I2CREADBYTECMD); // this actually only writes 1 byte
    out.write(C_I2CACKCMD);      // should this ACK be after the read below? (NO!! it's acking the rd cmd)

These don't actually return a condition code; the next byte back is the actual data from the chip, followed by a confirmation code.
Whereas:

    out.write(C_I2CSTARTBIT);
    out.write(C_I2CMULTIBYTEWRITE + 0);
    out.write(C_CHIPACCESSREG | 1); // TODO chip address (reg read needs low bit set)

returns 3 bytes of confirmation. I think the issue is that I'm not as familiar as you are with the BP; as a result I tend to see the difficulties in understanding some parts of the manual.
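One possible shape for the "table" idea is a map in the client from command byte to expected reply length, so the script layer always reads back exactly the right number of bytes. The opcodes and reply counts below mirror the snippets above but are assumptions to be checked against the BP binary-mode documentation, not authoritative values:

```java
import java.util.Map;

// Sketch of a command-to-reply table for the BP binary I2C mode.
// ASSUMPTION: opcodes and reply counts here are illustrative and must be
// verified against the Bus Pirate firmware/manual before use.
public class BpReplies {
    public static final int C_I2CSTARTBIT = 0x02;   // assumed start-bit opcode
    public static final int C_I2CREADCMD  = 0x04;   // assumed read-byte opcode
    public static final int C_I2CACKCMD   = 0x06;   // assumed ACK opcode

    /** command -> number of bytes the BP sends back for it. */
    public static final Map<Integer, Integer> REPLY_BYTES = Map.of(
        C_I2CSTARTBIT, 1,   // one confirmation byte
        C_I2CREADCMD,  1,   // the data byte itself, no separate status
        C_I2CACKCMD,   1);  // one confirmation byte for the ACK command
}
```

With a table like this, the "which commands reply?" question becomes a lookup instead of a guess.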
Possibly the way to implement "skip-n" would be to buffer content internally just before it is sent via USB, then have a "buffer" pointer which kicks in as it is sent over the USB.
Since the USART is currently the bottleneck, you could add a lot more processing before it would be noticed.
So to recap: spool from the chip into round-robin RAM, then have a round-robin pointer to feed the USB; at the start, just load the "offset" into the pointer, automatically discarding the data you do not want.
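To make the recap concrete, here is a small host-side Java model of that scheme. The real thing would live in the PIC firmware (in C), and the buffer size, skip count, and method names are all purely illustrative:

```java
// Host-side model of the proposed skip-n ring buffer: the capture side spools
// bytes round-robin, the USB side reads from a pointer pre-loaded with the
// skip offset, so the unwanted leading bytes are discarded for free.
public class SkipRing {
    private final byte[] buf;
    private int writePos = 0;
    private int readPos;
    private int shippable;   // bytes ready to ship; starts negative to "absorb" the skip

    public SkipRing(int size, int skip) {
        buf = new byte[size];
        readPos = skip % size;   // preload the offset: the first 'skip' bytes never ship
        shippable = -skip;
    }

    /** Capture side: spool one byte from the chip into the ring. */
    public void spool(byte b) {
        buf[writePos] = b;
        writePos = (writePos + 1) % buf.length;
        shippable++;
    }

    public boolean hasData() { return shippable > 0; }

    /** USB side: ship the next wanted byte. */
    public byte ship() {
        byte b = buf[readPos];
        readPos = (readPos + 1) % buf.length;
        shippable--;
        return b;
    }
}
```

Spooling 8 bytes into a ring created with skip=3 ships only the last 5, with no per-byte "discard" logic anywhere on the output path.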
I really don't want to get too deep into this because I will likely upset a lot of people.
I have spent a long time in Hong Kong/China sourcing, designing, and manufacturing 'budget' products/electronics, and can say the probes supplied are not value for money:
They are poorly assembled, with poor material quality (the metal is deliberately 'soft' so as to extend the life of the metal stamping machines). The micro-grippers are incorrectly manufactured, which causes anything gripped to slip out of the clip under the slightest movement. The "J" hooks are far too large and poorly stamped; again, it appears incorrect material has been used for the metal parts.
Whilst it does eventually start under OS X, I think it needs a better startup method; there are some errors during startup. (I never understand why people need a command-line Bash script to start a Java app under OS X; that's what manifest files are for.)
sudo ./run.sh
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._api-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._base-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._client-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._i2c-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._logging-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._logicsniffer-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._measure-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._org.apache.felix.log-1.0.0.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._org.apache.felix.prefs-1.0.4.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._org.apache.felix.shell-1.4.2.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._org.apache.felix.shell.remote-1.0.4.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._org.rxtx-2.2.0-8.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._spi-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._state-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._test-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._uart-1.0.0-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
Auto-deploy install: org.osgi.framework.BundleException: Unable to cache bundle: file:/Volumes/EXCHANGE/logic_sniffer/ols-0.8.5/./plugins/._util-1.0.4-SNAPSHOT.jar - java.util.zip.ZipException: error in opening zip file
[10/2/10 8:03:59 AM - INFO - nl.lxtreme.ols.client.Host]: LogicSniffer v0.8.5 started ...
This is the problem: many EEPROM chips only need an initial address; after that they auto-increment and wrap the address pointer (as long as they get an ACK). Other devices insist on getting an address for each byte; thankfully they are rarer.
Personally, I would say for a bulk read/write it should be up to the programmer to decide when to use it. I.e. if the chip allows a bulk read/write after a single self-incrementing address, then fine; if not, then the programmer has to hand-code it and use the existing system.
Even if we have to hand-code the address setup between the script/BP/I2C, then call a BP routine that just issues read/write + ACK and performs a bulk transfer at the end, it would be a significant improvement, since we would lose 2 bytes of overhead for each byte transferred.
Have a look at the datasheet 24wc64j.pdf, page 7, sequential read (sorry, cannot upload it due to the 400k limit).
Also, finally, on the BP manual: it may be clearer if we present the communication between the BP and the scripts as they do on page 8; working out how a BP communicates really 'did my head in'.
PS: there is a bug in the forum code; if the file is too large there is no way to remove it!!
I think we are talking at cross purposes; we have a multilevel, nested communication system when dealing with the BP.
The USB errors between the BP and the computer are a private matter between the OS drivers, and would be handled at a higher level.
My discussion of error handling refers to the BP-to-I2C interface, during mass transfers where the computer is not involved until the conclusion.
As we stand now, for a mass I2C transfer of 8k (after the address setup) we need something like:
for (int loop = 0; loop <= 8191; loop++) { // always do full chip for now
    // read byte, ack byte (0x04, 0x06) (according to sample code in BP tree)
    out.write(C_I2CREADCMD); // each of these actually only writes 1 byte
    out.write(C_I2CACKCMD);  // should this ACK be after the read below? (NO!! it's acking the rd cmd)
    tmpByteGetStore = in.read(); // READ BYTE: TODO this is maybe unsafe and will lock up on error
    buffer.put((byte) tmpByteGetStore); // store it
    ........
    in.read(); // TODO currently we throw the BP status byte away, we need to action this!!
}
That means we need to send 2 bytes, get 1 data byte, and get 1 status byte: a 3-byte overhead for each byte transferred. This gives an execution time of 131056-131058 ms under Java (I have timed it repeatedly and it is accurate to within 3 ms). Adding in other Java work such as screen updates, array counting, and byte analysis at the "........" point does not add a single ms to the timings, which means that any delay is in the RXTX library, the OS, the FTDI library, or, in reality, the bottleneck between the FTDI/PIC chips and the maximum USART speed.
Therefore, the ONLY way to get this timing down is to move the byte communication locally, between the BP and the I2C device, where a block transfer can occur at full I2C speed; then do one status check for the block between the script and the BP, and a tight read loop for the block transfer, saving us 3 bytes for each time round the loop.
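For concreteness, here is a sketch of what such a block-transfer primitive could look like from the script side. Nothing in it exists in the BP firmware today: the 0x0F opcode, the 16-bit count, and the single trailing status byte are all assumptions about a hypothetical command.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical BP "bulk read" command: one command + count out, then the BP
// does read+ACK locally at full I2C speed, streams the block back, and ends
// with a single good/bad status byte for the whole block.
public class BulkRead {
    static final int OP_BULK_READ = 0x0F; // ASSUMED opcode, purely illustrative
    static final int STATUS_OK    = 0x01; // ASSUMED success code

    public static byte[] bulkRead(OutputStream out, InputStream in, int count) throws IOException {
        out.write(OP_BULK_READ);
        out.write((count >> 8) & 0xFF);   // 16-bit byte count, high byte first
        out.write(count & 0xFF);
        byte[] data = new byte[count];
        for (int i = 0; i < count; i++) {
            int b = in.read();            // tight loop: no per-byte command/status traffic
            if (b < 0) throw new EOFException("short read at byte " + i);
            data[i] = (byte) b;
        }
        int status = in.read();           // one status byte for the entire block
        if (status != STATUS_OK) throw new IOException("bulk read failed, status=" + status);
        return data;
    }
}
```

Compared with the per-byte loop above, the serial traffic drops from 4 bytes per data byte to roughly 1 byte per data byte plus a 4-byte fixed cost, which is exactly where the proposed saving comes from.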
Yes, it is just standard I2C; many of the ICs have 'auto' increment on the addresses, so that after reading a byte and ACKing it, the next byte is ready.
Also, I would foresee a need for some sort of error reporting: if the BP does the 'bulk', then at the end of the operation we should be able to query the BP for a single error code that covers the bulk operation, i.e. good/bad.
[quote author="jawi"] @hardcore, rsdio: good remark. If I'm ready to implement the export-to-image functionality I'll take into consideration to provide as many image-formats as Java supports (which includes at least JPEG and PNG by default, not sure about TIFF). [/quote]
Personally, I would not beat myself up over multiple formats; it's better to spend the time on bug fixes and signal functionality. (There is a standard format to export waveforms in that allows other viewers to manipulate the signals.)
I think one lossless export format for graphics is fine; anything else can be provided by packages that specialize in graphics format manipulation. If you want to get really clever, then SVG, since it would allow individual editing of signal/timebase/scale and gridlines as separate sections of the image, plus it is scalable; but for this I think PNG or some other lossless format is fine.
On the generated image, it may be better to do PNG/TIFF (yes, they are 'bigger', but they produce verbatim results that can be digitally edited). This then allows the user to re-encode the capture with the correct resolution and quality for their own use (websites etc.).
If you encode as JPG, the algorithms vary for each platform and in some cases produce poor results when re-encoded a second time as JPG. I.e. a JPG of a JPG will not have the quality of the original; this may be a problem when stamping external text over the JPG, since the anti-aliasing gets really messy and the JPG has problems compressing down correctly. This is not an issue with PNG/TIFF, which after editing goes to JPG in a single step, making the anti-aliasing cleaner. (Also, the Java JPG library is not very good.)
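For reference, lossless PNG export needs nothing beyond the stock javax.imageio API already shipped with Java. A minimal sketch, where the file name and the drawing are just placeholders for the real waveform rendering:

```java
import javax.imageio.ImageIO;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

// Minimal lossless export: render the capture into a BufferedImage, write PNG.
public class PngExport {

    /** Write an image losslessly as PNG; safe to edit and re-encode later. */
    public static File export(BufferedImage img, String path) throws IOException {
        File f = new File(path);
        ImageIO.write(img, "png", f);
        return f;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(640, 200, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.GREEN);
        g.drawLine(0, 100, 639, 100);   // stand-in for the real waveform drawing
        g.dispose();
        export(img, "capture.png");     // illustrative file name
    }
}
```

Swapping the "png" format name for "jpg" is all it would take to offer JPG too, but for the quality reasons above PNG is the sensible default.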