Author 
Topic 

tbell
United Kingdom
5 Posts 
Posted  May 31 2019 : 16:58:40

Hi, I am modelling a parallel plate capacitor filled with a dielectric medium, and varying the permittivity of the dielectric to see how the capacitance varies. My input file is as follows:

* dielectric capacitor
C tplate.txt 1.0 0.0 0.0 1.1 +
C tplate.txt 5.0 0.0 0.0 1.0 +
C tplate_sides.txt 1.0 0.0 0.0 1.0
C bplate.txt 1.0 0.0 0.0 1.1 +
C bplate.txt 5.0 0.0 0.0 1.0 +
C bplate_sides.txt 1.0 0.0 0.0 1.1
D dielec_sides.txt 1.0 5.0 0.0 0.0 1.0 0.0 0.0 0.0
I find that as I reduce the permittivity, the capacitance scales roughly linearly:

perm   cap (nF)
1.2    2.7
1.5    3.03
2.0    3.9
5.0    9.2
10.0   17
20.0   35
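As a sanity check, the ideal parallel-plate formula C = eps0 * eps_r * A / d predicts strictly linear scaling in eps_r. A minimal sketch (the 20 x 20 plate size and 2.0 gap are my reading of the panel files, not stated in the thread, and fringing fields are ignored, so the absolute values will undershoot the solver's):

```python
# Ideal parallel-plate estimate, ignoring fringing fields.
# The 20 x 20 plate size and 2.0 gap are assumptions read off the
# panel files; coordinates are taken to be in metres.
EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def ideal_capacitance(eps_r, side=20.0, gap=2.0):
    """C = eps0 * eps_r * A / d for a square plate of the given side."""
    return EPS0 * eps_r * side * side / gap

for eps_r in (1.0, 1.2, 1.5, 2.0, 5.0, 10.0, 20.0):
    print(f"eps_r = {eps_r:5.1f}: C = {ideal_capacitance(eps_r) * 1e9:.2f} nF")
```

Whatever the exact geometry, the point stands: the eps_r = 1.0 result should be the smallest, not larger than the 1.2 and 1.5 cases.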
But if I set the relative permittivity to 1.0, I obtain a capacitance of 3.45 nF, greater than the values for 1.2 and 1.5. This doesn't seem correct to me; I thought the capacitance should be smallest with no dielectric. Am I doing something wrong? My component files are here:

* tplate
Q tplate 10.0 10.0 0.0 10.0 10.0 0.0 10.0 10.0 0.0 10.0 10.0 0.0
* tplate_sides
Q tplate 10.0 10.0 0.0 10.0 10.0 0.1 10.0 10.0 0.1 10.0 10.0 0.0
Q tplate 10.0 10.0 0.0 10.0 10.0 0.1 10.0 10.0 0.1 10.0 10.0 0.0
Q tplate 10.0 10.0 0.0 10.0 10.0 0.1 10.0 10.0 0.1 10.0 10.0 0.0
Q tplate 10.0 10.0 0.0 10.0 10.0 0.1 10.0 10.0 0.1 10.0 10.0 0.0
* dielec_sides
Q dielec_sides 10.0 10.0 0.0 10.0 10.0 2.0 10.0 10.0 2.0 10.0 10.0 0.0
Q dielec_sides 10.0 10.0 0.0 10.0 10.0 2.0 10.0 10.0 2.0 10.0 10.0 0.0
Q dielec_sides 10.0 10.0 0.0 10.0 10.0 2.0 10.0 10.0 2.0 10.0 10.0 0.0
Q dielec_sides 10.0 10.0 0.0 10.0 10.0 2.0 10.0 10.0 2.0 10.0 10.0 0.0
bplate is identical to tplate but differently named.

Enrico
410 Posts 
Posted  Jun 03 2019 : 16:01:43

Dear Tom,
I'm not able to replicate your issue. Actually, FasterCap warns you about a dummy dielectric interface, and removes it. FastCap does not realize it is dummy, but I still get results in line with expectations. Could you share the offending input file? (The one you attached for relative permittivity = 5.0 works, and changing it to 1.0 gives the results I expected.)
Best Regards, Enrico



tbell
United Kingdom
5 Posts 
Posted  Jun 03 2019 : 16:59:07

Dear Enrico,
When I change the permittivity to 1.1,
* dielectric capacitor
C tplate.txt 1.0 0.0 0.0 1.1 +
C tplate.txt 1.1 0.0 0.0 1.0 +
C tplate_sides.txt 1.0 0.0 0.0 1.0
C bplate.txt 1.0 0.0 0.0 1.1 +
C bplate.txt 1.1 0.0 0.0 1.0 +
C bplate_sides.txt 1.0 0.0 0.0 1.1
D dielec_sides.txt 1.0 1.1 0.0 0.0 1.0 0.0 0.0 0.0
I obtain:
g1_tplate 3.01806e-009 -2.54257e-009
g2_bplate -2.54291e-009 3.02228e-009
When it is set to 1.0 (dummy) or removed, I obtain:
g1_tplate 3.90724e-009 -3.45337e-009
g2_bplate -3.45263e-009 3.90729e-009
So the capacitances have increased upon removing the dielectric, which seems unphysical. Did you obtain these same values? Best regards, Tom 
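For reference, the plate-to-plate capacitance being compared here is the magnitude of the off-diagonal entry of the Maxwell capacitance matrix (off-diagonals are negative by convention, and the pasted values are nF-scale). A minimal sketch, with the matrices taken from the two runs above:

```python
# Extract the mutual (plate-to-plate) capacitance from a 2x2 Maxwell
# capacitance matrix: the off-diagonal terms are negative, and their
# magnitude is the direct capacitance between the two conductors.
def mutual_capacitance(maxwell):
    """Average the two off-diagonal entries and negate the result."""
    return -(maxwell[0][1] + maxwell[1][0]) / 2.0

# Matrices pasted from the two runs above (values in farads).
c_eps_1p1 = [[3.01806e-9, -2.54257e-9],
             [-2.54291e-9, 3.02228e-9]]
c_eps_1p0 = [[3.90724e-9, -3.45337e-9],
             [-3.45263e-9, 3.90729e-9]]

print(f"eps_r = 1.1: {mutual_capacitance(c_eps_1p1) * 1e9:.3f} nF")
print(f"eps_r = 1.0: {mutual_capacitance(c_eps_1p0) * 1e9:.3f} nF")
```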


Enrico
410 Posts 
Posted  Jun 03 2019 : 17:31:45

Ok, I understood your problem. I was running the simulations in manual mode: if you know the level of refinement you want for a set of similar simulations, you can run the automatic mode only once and reuse similar parameters for the others. So I ran the 5.0 case first, and reused its parameters for the others.
Now, all the dielectric cases force some additional large panels into the simulation, which cause further refinement. You can see that from the iteration numbers (the overall ones, not the GMRES iterations).
In the 1.0 case, the dielectric sides are removed - the same of course if you remove them yourself. In this case you hit a local minimum during the iterations. In general, if you have small width ratios (as in your capacitor plates, which are large and thin, even if this is not an extreme ratio), we advise lowering the -d parameter. Going to 0.8 avoids the local minimum for this case.
Otherwise, you can do the same as I did: run the simulation for 5.0 and then go into manual mode using the -m parameter reported at the last iteration ("Max Mesh relative refinement value: 0.000243206"). You can actually use the value from iteration N-1, as at iteration N the difference w.r.t. N-1 is less than the tolerance you set (but you did not know that before running the simulation).
Full copy&paste of last iteration follows:
Iteration number #6 ***************************
***************************************
Increasing the geometric refinement..
Warning: dummy dielectric-dielectric interface found in the input file, skipping
Refinement completed
Mesh refinement (-m): 0.00024336
***************************************
Computing the links..
Number of panels after refinement: 8832
Number of links to be computed: 5643200
Done computing links
***************************************
Precond Type(s) (-p): Two-levels, two-levels preconditioner dimension (-ps): 1024
GMRES Iteration: 0 1 2 3 4 5 6 7 8 9
GMRES Iteration: 0 1 2 3 4 5 6 7 8 9
Capacitance matrix is:
Dimension 2 x 2
g1_tplate 2.6156e-009 -2.14825e-009
g2_bplate -2.14825e-009 2.61566e-009
Weighted Frobenius norm of the difference between capacitance (auto option): 0.00962627
Solve statistics:
Number of input panels: 12 of which 12 conductors and 0 dielectric
Number of input panels to solver engine: 648
Number of panels after refinement: 8832
Number of potential estimates: 5325836
Number of links: 5652032 (uncompressed 78004224, compression ratio is 92.8%)
Max recursion level: 31
Max Mesh relative refinement value: 0.000243206
Iteration time: 8.564000s (0 days, 0 hours, 0 mins, 8 s)
Iteration allocated memory: 365131 kilobytes
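The "Weighted Frobenius norm of the difference" line above is the auto-mode stopping test: FasterCap keeps refining until the change between successive capacitance matrices drops below the tolerance. A minimal sketch of one such relative Frobenius-norm check (my own formulation; FasterCap's exact weighting may differ, and the previous-iteration matrix here is illustrative, not from the log):

```python
import math

# Relative Frobenius norm of the difference between two capacitance
# matrices: a stopping test for iterative mesh refinement - stop when
# the change between successive solves drops below a tolerance.
def rel_frobenius_diff(c_new, c_old):
    num = sum((a - b) ** 2
              for ra, rb in zip(c_new, c_old) for a, b in zip(ra, rb))
    den = sum(a ** 2 for row in c_new for a in row)
    return math.sqrt(num / den)

# Iteration #6 matrix from the log; iteration #5 values are illustrative.
c_iter5 = [[2.62e-9, -2.15e-9], [-2.15e-9, 2.62e-9]]
c_iter6 = [[2.6156e-9, -2.14825e-9], [-2.14825e-9, 2.61566e-9]]

print(f"relative change: {rel_frobenius_diff(c_iter6, c_iter5):.6f}")
```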
Best Regards, Enrico



tbell
United Kingdom
5 Posts 
Posted  Jun 03 2019 : 18:31:04

Ok I see, thank you. I think this issue may have been affecting some of my other experiments as well, as I was getting outlying and inconsistent data points for some simulations that were running significantly faster than others.
In general, how can I tell if I am hitting these local minima?
Other than thin samples needing a reduced -d parameter, are there any other situations that commonly arise that I should be aware of when selecting run parameters?
Tom 


Enrico
410 Posts 
Posted  Jun 04 2019 : 11:21:51

Dear Tom,
the main idea is that if you know your geometry, you can 'calibrate' the solution parameters, as they will be similar from simulation to simulation. As a strategy, you may start with some automatic simulations and see in how many solve iterations you converge, the number of GMRES iterations, and how the values change from solve iteration to solve iteration, while also considering the -d parameter in case you have high geometrical ratios.
One criterion you may apply in the 'calibration' is to run the same simulation lowering -d, and/or look at the number of panels from one solve iteration to the next. FasterCap forces at least +10% panels in subsequent solve iterations, but in general, a low increase in the number of panels may be a flag of a local minimum. In case of doubt, you can manually run the simulation working on -m (increasing it) and see if the result changes significantly.
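The panel-growth heuristic above can be sketched as a quick post-processing check (a minimal illustration; the +10% threshold is the one quoted above, and the panel counts are hypothetical):

```python
# Flag solve iterations whose panel count barely grew: a weak increase
# in panel count can indicate the refinement is stuck in a local minimum.
def flag_weak_refinement(panel_counts, min_growth=0.10):
    """Return indices of iterations that grew by less than min_growth."""
    flagged = []
    for i in range(1, len(panel_counts)):
        growth = panel_counts[i] / panel_counts[i - 1] - 1.0
        if growth < min_growth:
            flagged.append(i)
    return flagged

# Hypothetical panel counts per solve iteration.
counts = [648, 1540, 3210, 3400, 8832]
print(flag_weak_refinement(counts))  # -> [3]: only ~6% growth at step 3
```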
Once you have 'calibrated' your solution, you may run subsequent simulations with similar -m and -d parameters, as well as the preconditioner of your choice - sometimes Jacobi works best, while the two-levels preconditioner is better for hierarchical geometries, though it may sometimes increase the number of iterations.
But this is not a cure for long and thin triangles, as we discussed in another thread. The problem with long and thin triangles is that breaking them up does not remove the small angle; this requires a smoothing pass in the triangulation, keeping only the edges that define the actual contour, and not those created by the triangulation (a so-called constrained triangulation).
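The thin-triangle problem above can be quantified by the minimum interior angle of each panel. A minimal sketch (my own illustration, not part of FasterCap) showing why subdividing a sliver does not help - the small angle survives:

```python
import math

# Smallest interior angle of a 2D triangle. Long, thin (sliver)
# triangles have a tiny minimum angle, and splitting them in half
# does not improve it - the problem described above.
def min_angle_deg(a, b, c):
    def angle(p, q, r):
        # Angle at vertex q between edges q->p and q->r.
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return min(angle(a, b, c), angle(b, c, a), angle(c, a, b))

# A sliver triangle ~1.1 degrees; splitting it at the apex midpoint
# produces two triangles that still contain the same small angle.
sliver = ((0.0, 0.0), (10.0, 0.0), (5.0, 0.1))
print(f"min angle: {min_angle_deg(*sliver):.2f} degrees")
```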
Best Regards, Enrico






